2025-06-22 11:14:26.185236 | Job console starting
2025-06-22 11:14:26.201597 | Updating git repos
2025-06-22 11:14:26.282210 | Cloning repos into workspace
2025-06-22 11:14:26.484808 | Restoring repo states
2025-06-22 11:14:26.506525 | Merging changes
2025-06-22 11:14:26.506555 | Checking out repos
2025-06-22 11:14:26.754231 | Preparing playbooks
2025-06-22 11:14:27.435216 | Running Ansible setup
2025-06-22 11:14:31.792041 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-22 11:14:32.598491 |
2025-06-22 11:14:32.598623 | PLAY [Base pre]
2025-06-22 11:14:32.615088 |
2025-06-22 11:14:32.615213 | TASK [Setup log path fact]
2025-06-22 11:14:32.634052 | orchestrator | ok
2025-06-22 11:14:32.650873 |
2025-06-22 11:14:32.651012 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-22 11:14:32.689943 | orchestrator | ok
2025-06-22 11:14:32.701445 |
2025-06-22 11:14:32.701558 | TASK [emit-job-header : Print job information]
2025-06-22 11:14:32.740698 | # Job Information
2025-06-22 11:14:32.740871 | Ansible Version: 2.16.14
2025-06-22 11:14:32.740909 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-22 11:14:32.740944 | Pipeline: post
2025-06-22 11:14:32.740971 | Executor: 521e9411259a
2025-06-22 11:14:32.740998 | Triggered by: https://github.com/osism/testbed/commit/d2e1818e4c06761a6326584cff488f9c421ea258
2025-06-22 11:14:32.741021 | Event ID: 0a68d3e6-4f5a-11f0-92c9-1fae89cd208a
2025-06-22 11:14:32.747425 |
2025-06-22 11:14:32.747525 | LOOP [emit-job-header : Print node information]
2025-06-22 11:14:32.869793 | orchestrator | ok:
2025-06-22 11:14:32.869995 | orchestrator | # Node Information
2025-06-22 11:14:32.870028 | orchestrator | Inventory Hostname: orchestrator
2025-06-22 11:14:32.870053 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-22 11:14:32.870075 | orchestrator | Username: zuul-testbed03
2025-06-22 11:14:32.870095 | orchestrator | Distro: Debian 12.11
2025-06-22 11:14:32.870122 | orchestrator | Provider: static-testbed
2025-06-22 11:14:32.870144 | orchestrator | Region:
2025-06-22 11:14:32.870164 | orchestrator | Label: testbed-orchestrator
2025-06-22 11:14:32.870184 | orchestrator | Product Name: OpenStack Nova
2025-06-22 11:14:32.870202 | orchestrator | Interface IP: 81.163.193.140
2025-06-22 11:14:32.894788 |
2025-06-22 11:14:32.895043 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-22 11:14:33.377807 | orchestrator -> localhost | changed
2025-06-22 11:14:33.385501 |
2025-06-22 11:14:33.385619 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-22 11:14:34.571189 | orchestrator -> localhost | changed
2025-06-22 11:14:34.594288 |
2025-06-22 11:14:34.594433 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-22 11:14:34.920313 | orchestrator -> localhost | ok
2025-06-22 11:14:34.927048 |
2025-06-22 11:14:34.927153 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-22 11:14:34.945899 | orchestrator | ok
2025-06-22 11:14:34.963878 | orchestrator | included: /var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-22 11:14:34.971476 |
2025-06-22 11:14:34.971578 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-22 11:14:36.389793 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-22 11:14:36.389988 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/work/2016dcad747040a4b5a9e68a0799e111_id_rsa
2025-06-22 11:14:36.390025 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/work/2016dcad747040a4b5a9e68a0799e111_id_rsa.pub
2025-06-22 11:14:36.390052 | orchestrator -> localhost | The key fingerprint is:
2025-06-22 11:14:36.390079 | orchestrator -> localhost | SHA256:8YBfCmTb1Azx3M7nQjFMw1AvR1EapzVefet8ilGTWk4 zuul-build-sshkey
2025-06-22 11:14:36.390102 | orchestrator -> localhost | The key's randomart image is:
2025-06-22 11:14:36.390131 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-22 11:14:36.390153 | orchestrator -> localhost | | o +=.++ ++*|
2025-06-22 11:14:36.390174 | orchestrator -> localhost | | o = oo+.+.*=|
2025-06-22 11:14:36.390194 | orchestrator -> localhost | | + + + * =oo|
2025-06-22 11:14:36.390214 | orchestrator -> localhost | | o * o =E. |
2025-06-22 11:14:36.390233 | orchestrator -> localhost | | S . +*+. |
2025-06-22 11:14:36.390269 | orchestrator -> localhost | | .oo.o.|
2025-06-22 11:14:36.390292 | orchestrator -> localhost | | .o...|
2025-06-22 11:14:36.390313 | orchestrator -> localhost | | ... |
2025-06-22 11:14:36.390334 | orchestrator -> localhost | | |
2025-06-22 11:14:36.390355 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-22 11:14:36.390402 | orchestrator -> localhost | ok: Runtime: 0:00:00.793981
2025-06-22 11:14:36.397261 |
2025-06-22 11:14:36.397359 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-22 11:14:36.425149 | orchestrator | ok
2025-06-22 11:14:36.434883 | orchestrator | included: /var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-22 11:14:36.443669 |
2025-06-22 11:14:36.443750 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-22 11:14:36.466467 | orchestrator | skipping: Conditional result was False
2025-06-22 11:14:36.482637 |
2025-06-22 11:14:36.482740 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-22 11:14:37.514004 | orchestrator | changed
2025-06-22 11:14:37.522471 |
2025-06-22 11:14:37.522668 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-22 11:14:37.820146 | orchestrator | ok
2025-06-22 11:14:37.832120 |
2025-06-22 11:14:37.832299 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-22 11:14:38.225767 | orchestrator | ok
2025-06-22 11:14:38.235560 |
2025-06-22 11:14:38.235697 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-22 11:14:38.631131 | orchestrator | ok
2025-06-22 11:14:38.649152 |
2025-06-22 11:14:38.649338 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-22 11:14:38.674679 | orchestrator | skipping: Conditional result was False
2025-06-22 11:14:38.685001 |
2025-06-22 11:14:38.685126 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-22 11:14:39.236320 | orchestrator -> localhost | changed
2025-06-22 11:14:39.268745 |
2025-06-22 11:14:39.268881 | TASK [add-build-sshkey : Add back temp key]
2025-06-22 11:14:39.655584 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/work/2016dcad747040a4b5a9e68a0799e111_id_rsa (zuul-build-sshkey)
2025-06-22 11:14:39.657231 | orchestrator -> localhost | ok: Runtime: 0:00:00.013416
2025-06-22 11:14:39.684433 |
2025-06-22 11:14:39.684755 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-22 11:14:40.156473 | orchestrator | ok
2025-06-22 11:14:40.169714 |
2025-06-22 11:14:40.169851 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-22 11:14:40.209095 | orchestrator | skipping: Conditional result was False
2025-06-22 11:14:40.264637 |
2025-06-22 11:14:40.264790 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-22 11:14:40.658937 | orchestrator | ok
2025-06-22 11:14:40.672870 |
2025-06-22 11:14:40.673014 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-22 11:14:40.721772 | orchestrator | ok
2025-06-22 11:14:40.733684 |
2025-06-22 11:14:40.733854 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-22 11:14:41.092552 | orchestrator -> localhost | ok
2025-06-22 11:14:41.100246 |
2025-06-22 11:14:41.100386 | TASK [validate-host : Collect information about the host]
2025-06-22 11:14:42.303540 | orchestrator | ok
2025-06-22 11:14:42.321464 |
2025-06-22 11:14:42.321709 | TASK [validate-host : Sanitize hostname]
2025-06-22 11:14:42.392073 | orchestrator | ok
2025-06-22 11:14:42.400304 |
2025-06-22 11:14:42.400433 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-22 11:14:43.023998 | orchestrator -> localhost | changed
2025-06-22 11:14:43.039493 |
2025-06-22 11:14:43.042614 | TASK [validate-host : Collect information about zuul worker]
2025-06-22 11:14:43.489577 | orchestrator | ok
2025-06-22 11:14:43.495220 |
2025-06-22 11:14:43.495349 | TASK [validate-host : Write out all zuul information for each host]
2025-06-22 11:14:44.076025 | orchestrator -> localhost | changed
2025-06-22 11:14:44.098750 |
2025-06-22 11:14:44.098989 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-22 11:14:44.410527 | orchestrator | ok
2025-06-22 11:14:44.420986 |
2025-06-22 11:14:44.421136 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-22 11:15:18.962611 | orchestrator | changed:
2025-06-22 11:15:18.963120 | orchestrator | .d..t...... src/
2025-06-22 11:15:18.963214 | orchestrator | .d..t...... src/github.com/
2025-06-22 11:15:18.963300 | orchestrator | .d..t...... src/github.com/osism/
2025-06-22 11:15:18.963359 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-22 11:15:18.963413 | orchestrator | RedHat.yml
2025-06-22 11:15:18.983791 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-22 11:15:18.983814 | orchestrator | RedHat.yml
2025-06-22 11:15:18.983880 | orchestrator | = 1.53.0"...
2025-06-22 11:15:33.079937 | orchestrator | 11:15:33.079 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-22 11:15:33.148884 | orchestrator | 11:15:33.148 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-22 11:15:34.207190 | orchestrator | 11:15:34.206 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-06-22 11:15:35.327559 | orchestrator | 11:15:35.327 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-06-22 11:15:36.475539 | orchestrator | 11:15:36.475 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-22 11:15:38.407484 | orchestrator | 11:15:38.407 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 11:15:39.567662 | orchestrator | 11:15:39.567 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-22 11:15:40.500824 | orchestrator | 11:15:40.500 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 11:15:40.500929 | orchestrator | 11:15:40.500 STDOUT terraform: Providers are signed by their developers.
2025-06-22 11:15:40.500954 | orchestrator | 11:15:40.500 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-22 11:15:40.500959 | orchestrator | 11:15:40.500 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-22 11:15:40.500966 | orchestrator | 11:15:40.500 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-22 11:15:40.501103 | orchestrator | 11:15:40.500 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-22 11:15:40.501112 | orchestrator | 11:15:40.501 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-22 11:15:40.501119 | orchestrator | 11:15:40.501 STDOUT terraform: you run "tofu init" in the future.
2025-06-22 11:15:40.501164 | orchestrator | 11:15:40.501 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-22 11:15:40.501221 | orchestrator | 11:15:40.501 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-22 11:15:40.501269 | orchestrator | 11:15:40.501 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-22 11:15:40.501290 | orchestrator | 11:15:40.501 STDOUT terraform: should now work.
2025-06-22 11:15:40.501339 | orchestrator | 11:15:40.501 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-22 11:15:40.501397 | orchestrator | 11:15:40.501 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-22 11:15:40.501441 | orchestrator | 11:15:40.501 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-22 11:15:40.637273 | orchestrator | 11:15:40.636 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-22 11:15:40.637335 | orchestrator | 11:15:40.636 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-06-22 11:15:40.827135 | orchestrator | 11:15:40.826 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-22 11:15:40.827188 | orchestrator | 11:15:40.826 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-22 11:15:40.827195 | orchestrator | 11:15:40.827 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-22 11:15:40.827200 | orchestrator | 11:15:40.827 STDOUT terraform: for this configuration.
2025-06-22 11:15:40.978370 | orchestrator | 11:15:40.978 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-22 11:15:40.978438 | orchestrator | 11:15:40.978 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-06-22 11:15:41.087619 | orchestrator | 11:15:41.087 STDOUT terraform: ci.auto.tfvars
2025-06-22 11:15:41.476883 | orchestrator | 11:15:41.476 STDOUT terraform: default_custom.tf
2025-06-22 11:15:41.631347 | orchestrator | 11:15:41.631 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-22 11:15:42.465515 | orchestrator | 11:15:42.464 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-22 11:15:43.143590 | orchestrator | 11:15:43.143 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-22 11:15:43.442269 | orchestrator | 11:15:43.442 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-22 11:15:43.442348 | orchestrator | 11:15:43.442 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-22 11:15:43.442363 | orchestrator | 11:15:43.442 STDOUT terraform:   + create
2025-06-22 11:15:43.442375 | orchestrator | 11:15:43.442 STDOUT terraform:  <= read (data resources)
2025-06-22 11:15:43.442389 | orchestrator | 11:15:43.442 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-22 11:15:43.442405 | orchestrator | 11:15:43.442 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-22 11:15:43.442420 | orchestrator | 11:15:43.442 STDOUT terraform:   # (config refers to values not yet known)
2025-06-22 11:15:43.442457 | orchestrator | 11:15:43.442 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-22 11:15:43.442492 | orchestrator | 11:15:43.442 STDOUT terraform:       + checksum = (known after apply)
2025-06-22 11:15:43.442524 | orchestrator | 11:15:43.442 STDOUT terraform:       + created_at = (known after apply)
2025-06-22 11:15:43.442556 | orchestrator | 11:15:43.442 STDOUT terraform:       + file = (known after apply)
2025-06-22 11:15:43.442589 | orchestrator | 11:15:43.442 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.442622 | orchestrator | 11:15:43.442 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.442656 | orchestrator | 11:15:43.442 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-22 11:15:43.442671 | orchestrator | 11:15:43.442 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-22 11:15:43.442686 | orchestrator | 11:15:43.442 STDOUT terraform:       + most_recent = true
2025-06-22 11:15:43.442726 | orchestrator | 11:15:43.442 STDOUT terraform:       + name = (known after apply)
2025-06-22 11:15:43.442756 | orchestrator | 11:15:43.442 STDOUT terraform:       + protected = (known after apply)
2025-06-22 11:15:43.442787 | orchestrator | 11:15:43.442 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.442818 | orchestrator | 11:15:43.442 STDOUT terraform:       + schema = (known after apply)
2025-06-22 11:15:43.442851 | orchestrator | 11:15:43.442 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-22 11:15:43.442869 | orchestrator | 11:15:43.442 STDOUT terraform:       + tags = (known after apply)
2025-06-22 11:15:43.442909 | orchestrator | 11:15:43.442 STDOUT terraform:       + updated_at = (known after apply)
2025-06-22 11:15:43.442925 | orchestrator | 11:15:43.442 STDOUT terraform:     }
2025-06-22 11:15:43.443001 | orchestrator | 11:15:43.442 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-22 11:15:43.443019 | orchestrator | 11:15:43.442 STDOUT terraform:   # (config refers to values not yet known)
2025-06-22 11:15:43.443060 | orchestrator | 11:15:43.443 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-22 11:15:43.443091 | orchestrator | 11:15:43.443 STDOUT terraform:       + checksum = (known after apply)
2025-06-22 11:15:43.443124 | orchestrator | 11:15:43.443 STDOUT terraform:       + created_at = (known after apply)
2025-06-22 11:15:43.443166 | orchestrator | 11:15:43.443 STDOUT terraform:       + file = (known after apply)
2025-06-22 11:15:43.443182 | orchestrator | 11:15:43.443 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.443895 | orchestrator | 11:15:43.443 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.443916 | orchestrator | 11:15:43.443 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-22 11:15:43.443927 | orchestrator | 11:15:43.443 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-22 11:15:43.443959 | orchestrator | 11:15:43.443 STDOUT terraform:       + most_recent = true
2025-06-22 11:15:43.443970 | orchestrator | 11:15:43.443 STDOUT terraform:       + name = (known after apply)
2025-06-22 11:15:43.443981 | orchestrator | 11:15:43.443 STDOUT terraform:       + protected = (known after apply)
2025-06-22 11:15:43.443992 | orchestrator | 11:15:43.443 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.444003 | orchestrator | 11:15:43.443 STDOUT terraform:       + schema = (known after apply)
2025-06-22 11:15:43.444014 | orchestrator | 11:15:43.443 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-22 11:15:43.444025 | orchestrator | 11:15:43.443 STDOUT terraform:       + tags = (known after apply)
2025-06-22 11:15:43.444036 | orchestrator | 11:15:43.443 STDOUT terraform:       + updated_at = (known after apply)
2025-06-22 11:15:43.444047 | orchestrator | 11:15:43.443 STDOUT terraform:     }
2025-06-22 11:15:43.444058 | orchestrator | 11:15:43.443 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-22 11:15:43.444082 | orchestrator | 11:15:43.443 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-22 11:15:43.444093 | orchestrator | 11:15:43.443 STDOUT terraform:       + content = (known after apply)
2025-06-22 11:15:43.444104 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-22 11:15:43.444115 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-22 11:15:43.444126 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-22 11:15:43.444138 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-22 11:15:43.444148 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-22 11:15:43.444159 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-22 11:15:43.444171 | orchestrator | 11:15:43.443 STDOUT terraform:       + directory_permission = "0777"
2025-06-22 11:15:43.444182 | orchestrator | 11:15:43.443 STDOUT terraform:       + file_permission = "0644"
2025-06-22 11:15:43.444193 | orchestrator | 11:15:43.443 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-06-22 11:15:43.444204 | orchestrator | 11:15:43.443 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.444215 | orchestrator | 11:15:43.443 STDOUT terraform:     }
2025-06-22 11:15:43.444231 | orchestrator | 11:15:43.443 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-22 11:15:43.444242 | orchestrator | 11:15:43.443 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-22 11:15:43.444253 | orchestrator | 11:15:43.443 STDOUT terraform:       + content = (known after apply)
2025-06-22 11:15:43.444264 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-22 11:15:43.444275 | orchestrator | 11:15:43.443 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-22 11:15:43.444286 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-22 11:15:43.444297 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-22 11:15:43.444308 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-22 11:15:43.444324 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-22 11:15:43.444336 | orchestrator | 11:15:43.444 STDOUT terraform:       + directory_permission = "0777"
2025-06-22 11:15:43.444346 | orchestrator | 11:15:43.444 STDOUT terraform:       + file_permission = "0644"
2025-06-22 11:15:43.444357 | orchestrator | 11:15:43.444 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-06-22 11:15:43.444372 | orchestrator | 11:15:43.444 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.444383 | orchestrator | 11:15:43.444 STDOUT terraform:     }
2025-06-22 11:15:43.444395 | orchestrator | 11:15:43.444 STDOUT terraform:   # local_file.inventory will be created
2025-06-22 11:15:43.444406 | orchestrator | 11:15:43.444 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-22 11:15:43.444417 | orchestrator | 11:15:43.444 STDOUT terraform:       + content = (known after apply)
2025-06-22 11:15:43.444438 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-22 11:15:43.444450 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-22 11:15:43.444464 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-22 11:15:43.444486 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-22 11:15:43.444509 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-22 11:15:43.446218 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-22 11:15:43.446240 | orchestrator | 11:15:43.444 STDOUT terraform:       + directory_permission = "0777"
2025-06-22 11:15:43.446245 | orchestrator | 11:15:43.444 STDOUT terraform:       + file_permission = "0644"
2025-06-22 11:15:43.446249 | orchestrator | 11:15:43.444 STDOUT terraform:       + filename = "inventory.ci"
2025-06-22 11:15:43.446263 | orchestrator | 11:15:43.444 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446267 | orchestrator | 11:15:43.444 STDOUT terraform:     }
2025-06-22 11:15:43.446271 | orchestrator | 11:15:43.444 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-22 11:15:43.446275 | orchestrator | 11:15:43.444 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-22 11:15:43.446280 | orchestrator | 11:15:43.444 STDOUT terraform:       + content = (sensitive value)
2025-06-22 11:15:43.446284 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-22 11:15:43.446288 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-22 11:15:43.446292 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-22 11:15:43.446296 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-22 11:15:43.446300 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-22 11:15:43.446304 | orchestrator | 11:15:43.444 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-22 11:15:43.446307 | orchestrator | 11:15:43.444 STDOUT terraform:       + directory_permission = "0700"
2025-06-22 11:15:43.446311 | orchestrator | 11:15:43.444 STDOUT terraform:       + file_permission = "0600"
2025-06-22 11:15:43.446315 | orchestrator | 11:15:43.445 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-06-22 11:15:43.446324 | orchestrator | 11:15:43.445 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446328 | orchestrator | 11:15:43.445 STDOUT terraform:     }
2025-06-22 11:15:43.446332 | orchestrator | 11:15:43.445 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-22 11:15:43.446336 | orchestrator | 11:15:43.445 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-22 11:15:43.446340 | orchestrator | 11:15:43.445 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446344 | orchestrator | 11:15:43.445 STDOUT terraform:     }
2025-06-22 11:15:43.446347 | orchestrator | 11:15:43.445 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-22 11:15:43.446357 | orchestrator | 11:15:43.445 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-22 11:15:43.446361 | orchestrator | 11:15:43.445 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.446365 | orchestrator | 11:15:43.445 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.446369 | orchestrator | 11:15:43.445 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446373 | orchestrator | 11:15:43.445 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.446376 | orchestrator | 11:15:43.445 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.446380 | orchestrator | 11:15:43.445 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-06-22 11:15:43.446384 | orchestrator | 11:15:43.445 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.446388 | orchestrator | 11:15:43.445 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.446392 | orchestrator | 11:15:43.445 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.446396 | orchestrator | 11:15:43.445 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.446399 | orchestrator | 11:15:43.445 STDOUT terraform:     }
2025-06-22 11:15:43.446408 | orchestrator | 11:15:43.445 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-22 11:15:43.446412 | orchestrator | 11:15:43.445 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-22 11:15:43.446416 | orchestrator | 11:15:43.445 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.446420 | orchestrator | 11:15:43.445 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.446424 | orchestrator | 11:15:43.445 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446427 | orchestrator | 11:15:43.445 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.446431 | orchestrator | 11:15:43.445 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.446435 | orchestrator | 11:15:43.445 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-06-22 11:15:43.446439 | orchestrator | 11:15:43.445 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.446443 | orchestrator | 11:15:43.445 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.446446 | orchestrator | 11:15:43.445 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.446450 | orchestrator | 11:15:43.445 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.446454 | orchestrator | 11:15:43.445 STDOUT terraform:     }
2025-06-22 11:15:43.446458 | orchestrator | 11:15:43.445 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-22 11:15:43.446462 | orchestrator | 11:15:43.445 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-22 11:15:43.446466 | orchestrator | 11:15:43.445 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.446477 | orchestrator | 11:15:43.445 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.446481 | orchestrator | 11:15:43.446 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446485 | orchestrator | 11:15:43.446 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.446489 | orchestrator | 11:15:43.446 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.446495 | orchestrator | 11:15:43.446 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-06-22 11:15:43.446499 | orchestrator | 11:15:43.446 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.446503 | orchestrator | 11:15:43.446 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.446506 | orchestrator | 11:15:43.446 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.446510 | orchestrator | 11:15:43.446 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.446514 | orchestrator | 11:15:43.446 STDOUT terraform:     }
2025-06-22 11:15:43.446518 | orchestrator | 11:15:43.446 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-22 11:15:43.446522 | orchestrator | 11:15:43.446 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-22 11:15:43.446525 | orchestrator | 11:15:43.446 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.446529 | orchestrator | 11:15:43.446 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.446535 | orchestrator | 11:15:43.446 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446539 | orchestrator | 11:15:43.446 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.446543 | orchestrator | 11:15:43.446 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.446547 | orchestrator | 11:15:43.446 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-06-22 11:15:43.446576 | orchestrator | 11:15:43.446 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.446594 | orchestrator | 11:15:43.446 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.446617 | orchestrator | 11:15:43.446 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.446644 | orchestrator | 11:15:43.446 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.446651 | orchestrator | 11:15:43.446 STDOUT terraform:     }
2025-06-22 11:15:43.446704 | orchestrator | 11:15:43.446 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-22 11:15:43.446739 | orchestrator | 11:15:43.446 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-22 11:15:43.446772 | orchestrator | 11:15:43.446 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.446811 | orchestrator | 11:15:43.446 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.446830 | orchestrator | 11:15:43.446 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.446867 | orchestrator | 11:15:43.446 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.446903 | orchestrator | 11:15:43.446 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.446971 | orchestrator | 11:15:43.446 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-06-22 11:15:43.446989 | orchestrator | 11:15:43.446 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.447010 | orchestrator | 11:15:43.446 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.447042 | orchestrator | 11:15:43.447 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.447060 | orchestrator | 11:15:43.447 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.447067 | orchestrator | 11:15:43.447 STDOUT terraform:     }
2025-06-22 11:15:43.447112 | orchestrator | 11:15:43.447 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-22 11:15:43.447156 | orchestrator | 11:15:43.447 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-22 11:15:43.447189 | orchestrator | 11:15:43.447 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.447219 | orchestrator | 11:15:43.447 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.447248 | orchestrator | 11:15:43.447 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.447289 | orchestrator | 11:15:43.447 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.447319 | orchestrator | 11:15:43.447 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.447364 | orchestrator | 11:15:43.447 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-06-22 11:15:43.447398 | orchestrator | 11:15:43.447 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.447418 | orchestrator | 11:15:43.447 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.447442 | orchestrator | 11:15:43.447 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.447467 | orchestrator | 11:15:43.447 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.447473 | orchestrator | 11:15:43.447 STDOUT terraform:     }
2025-06-22 11:15:43.447521 | orchestrator | 11:15:43.447 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-22 11:15:43.447566 | orchestrator | 11:15:43.447 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-22 11:15:43.447600 | orchestrator | 11:15:43.447 STDOUT terraform:       + attachment = (known after apply)
2025-06-22 11:15:43.447624 | orchestrator | 11:15:43.447 STDOUT terraform:       + availability_zone = "nova"
2025-06-22 11:15:43.447661 | orchestrator | 11:15:43.447 STDOUT terraform:       + id = (known after apply)
2025-06-22 11:15:43.447706 | orchestrator | 11:15:43.447 STDOUT terraform:       + image_id = (known after apply)
2025-06-22 11:15:43.447727 | orchestrator | 11:15:43.447 STDOUT terraform:       + metadata = (known after apply)
2025-06-22 11:15:43.447776 | orchestrator | 11:15:43.447 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-06-22 11:15:43.447806 | orchestrator | 11:15:43.447 STDOUT terraform:       + region = (known after apply)
2025-06-22 11:15:43.447825 | orchestrator | 11:15:43.447 STDOUT terraform:       + size = 80
2025-06-22 11:15:43.447848 | orchestrator | 11:15:43.447 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-22 11:15:43.447874 | orchestrator | 11:15:43.447 STDOUT terraform:       + volume_type = "ssd"
2025-06-22 11:15:43.447880 | orchestrator | 11:15:43.447 STDOUT terraform:     }
2025-06-22 11:15:43.447923 | orchestrator | 11:15:43.447 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-22 11:15:43.447998 | orchestrator | 11:15:43.447 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-22 11:15:43.448034 | orchestrator | 11:15:43.447 STDOUT
terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.448052 | orchestrator | 11:15:43.448 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.448088 | orchestrator | 11:15:43.448 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.448122 | orchestrator | 11:15:43.448 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.448159 | orchestrator | 11:15:43.448 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-22 11:15:43.448197 | orchestrator | 11:15:43.448 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.448203 | orchestrator | 11:15:43.448 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.448233 | orchestrator | 11:15:43.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.448257 | orchestrator | 11:15:43.448 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.448271 | orchestrator | 11:15:43.448 STDOUT terraform:  } 2025-06-22 11:15:43.448310 | orchestrator | 11:15:43.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-22 11:15:43.448368 | orchestrator | 11:15:43.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.448386 | orchestrator | 11:15:43.448 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.448410 | orchestrator | 11:15:43.448 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.448451 | orchestrator | 11:15:43.448 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.448480 | orchestrator | 11:15:43.448 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.448532 | orchestrator | 11:15:43.448 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-22 11:15:43.448553 | orchestrator | 11:15:43.448 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.448573 | orchestrator | 11:15:43.448 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.448616 | 
orchestrator | 11:15:43.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.448622 | orchestrator | 11:15:43.448 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.448627 | orchestrator | 11:15:43.448 STDOUT terraform:  } 2025-06-22 11:15:43.448668 | orchestrator | 11:15:43.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-22 11:15:43.448710 | orchestrator | 11:15:43.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.448743 | orchestrator | 11:15:43.448 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.448761 | orchestrator | 11:15:43.448 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.448797 | orchestrator | 11:15:43.448 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.448830 | orchestrator | 11:15:43.448 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.448959 | orchestrator | 11:15:43.448 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-22 11:15:43.448967 | orchestrator | 11:15:43.448 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.448989 | orchestrator | 11:15:43.448 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.449014 | orchestrator | 11:15:43.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.449039 | orchestrator | 11:15:43.449 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.449045 | orchestrator | 11:15:43.449 STDOUT terraform:  } 2025-06-22 11:15:43.449093 | orchestrator | 11:15:43.449 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-22 11:15:43.449136 | orchestrator | 11:15:43.449 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.449170 | orchestrator | 11:15:43.449 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.449202 | orchestrator | 
11:15:43.449 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.449230 | orchestrator | 11:15:43.449 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.449272 | orchestrator | 11:15:43.449 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.449303 | orchestrator | 11:15:43.449 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-22 11:15:43.449337 | orchestrator | 11:15:43.449 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.449370 | orchestrator | 11:15:43.449 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.449377 | orchestrator | 11:15:43.449 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.449400 | orchestrator | 11:15:43.449 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.449406 | orchestrator | 11:15:43.449 STDOUT terraform:  } 2025-06-22 11:15:43.449453 | orchestrator | 11:15:43.449 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-22 11:15:43.449495 | orchestrator | 11:15:43.449 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.449532 | orchestrator | 11:15:43.449 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.449550 | orchestrator | 11:15:43.449 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.449585 | orchestrator | 11:15:43.449 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.449619 | orchestrator | 11:15:43.449 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.449660 | orchestrator | 11:15:43.449 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-22 11:15:43.449693 | orchestrator | 11:15:43.449 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.449704 | orchestrator | 11:15:43.449 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.449725 | orchestrator | 11:15:43.449 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 
11:15:43.449749 | orchestrator | 11:15:43.449 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.449755 | orchestrator | 11:15:43.449 STDOUT terraform:  } 2025-06-22 11:15:43.449801 | orchestrator | 11:15:43.449 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-22 11:15:43.449857 | orchestrator | 11:15:43.449 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.449878 | orchestrator | 11:15:43.449 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.449901 | orchestrator | 11:15:43.449 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.449949 | orchestrator | 11:15:43.449 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.449993 | orchestrator | 11:15:43.449 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.455237 | orchestrator | 11:15:43.449 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-22 11:15:43.455256 | orchestrator | 11:15:43.455 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.455262 | orchestrator | 11:15:43.455 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.455267 | orchestrator | 11:15:43.455 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.455272 | orchestrator | 11:15:43.455 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.455277 | orchestrator | 11:15:43.455 STDOUT terraform:  } 2025-06-22 11:15:43.455285 | orchestrator | 11:15:43.455 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-22 11:15:43.455291 | orchestrator | 11:15:43.455 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.455341 | orchestrator | 11:15:43.455 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.455350 | orchestrator | 11:15:43.455 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.455412 | 
orchestrator | 11:15:43.455 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.455421 | orchestrator | 11:15:43.455 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.455470 | orchestrator | 11:15:43.455 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-22 11:15:43.455503 | orchestrator | 11:15:43.455 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.455512 | orchestrator | 11:15:43.455 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.455554 | orchestrator | 11:15:43.455 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.455563 | orchestrator | 11:15:43.455 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.455583 | orchestrator | 11:15:43.455 STDOUT terraform:  } 2025-06-22 11:15:43.455636 | orchestrator | 11:15:43.455 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-22 11:15:43.455677 | orchestrator | 11:15:43.455 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.455694 | orchestrator | 11:15:43.455 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.455736 | orchestrator | 11:15:43.455 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.455745 | orchestrator | 11:15:43.455 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.455791 | orchestrator | 11:15:43.455 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.455833 | orchestrator | 11:15:43.455 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-22 11:15:43.455875 | orchestrator | 11:15:43.455 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.455886 | orchestrator | 11:15:43.455 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.455920 | orchestrator | 11:15:43.455 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.455928 | orchestrator | 11:15:43.455 STDOUT terraform:  + volume_type = "ssd" 
2025-06-22 11:15:43.455999 | orchestrator | 11:15:43.455 STDOUT terraform:  } 2025-06-22 11:15:43.456032 | orchestrator | 11:15:43.455 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-22 11:15:43.456088 | orchestrator | 11:15:43.456 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 11:15:43.456100 | orchestrator | 11:15:43.456 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 11:15:43.456129 | orchestrator | 11:15:43.456 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.456162 | orchestrator | 11:15:43.456 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.456204 | orchestrator | 11:15:43.456 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 11:15:43.456235 | orchestrator | 11:15:43.456 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-22 11:15:43.456295 | orchestrator | 11:15:43.456 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.456302 | orchestrator | 11:15:43.456 STDOUT terraform:  + size = 20 2025-06-22 11:15:43.456312 | orchestrator | 11:15:43.456 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 11:15:43.456341 | orchestrator | 11:15:43.456 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 11:15:43.456353 | orchestrator | 11:15:43.456 STDOUT terraform:  } 2025-06-22 11:15:43.456408 | orchestrator | 11:15:43.456 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-22 11:15:43.456442 | orchestrator | 11:15:43.456 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-22 11:15:43.456475 | orchestrator | 11:15:43.456 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 11:15:43.456508 | orchestrator | 11:15:43.456 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 11:15:43.456554 | orchestrator | 11:15:43.456 STDOUT terraform:  + all_metadata = (known after apply) 
2025-06-22 11:15:43.456566 | orchestrator | 11:15:43.456 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.456606 | orchestrator | 11:15:43.456 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.456623 | orchestrator | 11:15:43.456 STDOUT terraform:  + config_drive = true 2025-06-22 11:15:43.456662 | orchestrator | 11:15:43.456 STDOUT terraform:  + created = (known after apply) 2025-06-22 11:15:43.456713 | orchestrator | 11:15:43.456 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 11:15:43.456719 | orchestrator | 11:15:43.456 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-22 11:15:43.456725 | orchestrator | 11:15:43.456 STDOUT terraform:  + force_delete = false 2025-06-22 11:15:43.456797 | orchestrator | 11:15:43.456 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 11:15:43.456803 | orchestrator | 11:15:43.456 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.456849 | orchestrator | 11:15:43.456 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 11:15:43.456857 | orchestrator | 11:15:43.456 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 11:15:43.456907 | orchestrator | 11:15:43.456 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 11:15:43.456915 | orchestrator | 11:15:43.456 STDOUT terraform:  + name = "testbed-manager" 2025-06-22 11:15:43.456987 | orchestrator | 11:15:43.456 STDOUT terraform:  + power_state = "active" 2025-06-22 11:15:43.456993 | orchestrator | 11:15:43.456 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.457000 | orchestrator | 11:15:43.456 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 11:15:43.457044 | orchestrator | 11:15:43.456 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 11:15:43.457056 | orchestrator | 11:15:43.457 STDOUT terraform:  + updated = (known after apply) 2025-06-22 11:15:43.457097 | orchestrator | 11:15:43.457 STDOUT terraform:  + 
user_data = (known after apply) 2025-06-22 11:15:43.457105 | orchestrator | 11:15:43.457 STDOUT terraform:  + block_device { 2025-06-22 11:15:43.457136 | orchestrator | 11:15:43.457 STDOUT terraform:  + boot_index = 0 2025-06-22 11:15:43.457168 | orchestrator | 11:15:43.457 STDOUT terraform:  + delete_on_termination = false 2025-06-22 11:15:43.457207 | orchestrator | 11:15:43.457 STDOUT terraform:  + destination_type = "volume" 2025-06-22 11:15:43.457217 | orchestrator | 11:15:43.457 STDOUT terraform:  + multiattach = false 2025-06-22 11:15:43.457271 | orchestrator | 11:15:43.457 STDOUT terraform:  + source_type = "volume" 2025-06-22 11:15:43.457278 | orchestrator | 11:15:43.457 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 11:15:43.457288 | orchestrator | 11:15:43.457 STDOUT terraform:  } 2025-06-22 11:15:43.457318 | orchestrator | 11:15:43.457 STDOUT terraform:  + network { 2025-06-22 11:15:43.457327 | orchestrator | 11:15:43.457 STDOUT terraform:  + access_network = false 2025-06-22 11:15:43.457385 | orchestrator | 11:15:43.457 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 11:15:43.457394 | orchestrator | 11:15:43.457 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 11:15:43.457402 | orchestrator | 11:15:43.457 STDOUT terraform:  + mac = (known after apply) 2025-06-22 11:15:43.457442 | orchestrator | 11:15:43.457 STDOUT terraform:  + name = (known after apply) 2025-06-22 11:15:43.457488 | orchestrator | 11:15:43.457 STDOUT terraform:  + port = (known after apply) 2025-06-22 11:15:43.457500 | orchestrator | 11:15:43.457 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 11:15:43.457506 | orchestrator | 11:15:43.457 STDOUT terraform:  } 2025-06-22 11:15:43.457551 | orchestrator | 11:15:43.457 STDOUT terraform:  } 2025-06-22 11:15:43.457639 | orchestrator | 11:15:43.457 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-22 11:15:43.457686 | orchestrator | 11:15:43.457 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 11:15:43.457698 | orchestrator | 11:15:43.457 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 11:15:43.457761 | orchestrator | 11:15:43.457 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 11:15:43.457773 | orchestrator | 11:15:43.457 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 11:15:43.457829 | orchestrator | 11:15:43.457 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.457836 | orchestrator | 11:15:43.457 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.457846 | orchestrator | 11:15:43.457 STDOUT terraform:  + config_drive = true 2025-06-22 11:15:43.457874 | orchestrator | 11:15:43.457 STDOUT terraform:  + created = (known after apply) 2025-06-22 11:15:43.457924 | orchestrator | 11:15:43.457 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 11:15:43.457951 | orchestrator | 11:15:43.457 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 11:15:43.457988 | orchestrator | 11:15:43.457 STDOUT terraform:  + force_delete = false 2025-06-22 11:15:43.458045 | orchestrator | 11:15:43.457 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 11:15:43.458087 | orchestrator | 11:15:43.458 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.458116 | orchestrator | 11:15:43.458 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 11:15:43.458153 | orchestrator | 11:15:43.458 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 11:15:43.458179 | orchestrator | 11:15:43.458 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 11:15:43.458213 | orchestrator | 11:15:43.458 STDOUT terraform:  + name = "testbed-node-0" 2025-06-22 11:15:43.458237 | orchestrator | 11:15:43.458 STDOUT terraform:  + power_state = "active" 2025-06-22 11:15:43.458276 | orchestrator | 11:15:43.458 STDOUT terraform:  + region = (known after 
apply) 2025-06-22 11:15:43.458314 | orchestrator | 11:15:43.458 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 11:15:43.458334 | orchestrator | 11:15:43.458 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 11:15:43.458386 | orchestrator | 11:15:43.458 STDOUT terraform:  + updated = (known after apply) 2025-06-22 11:15:43.458421 | orchestrator | 11:15:43.458 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 11:15:43.458430 | orchestrator | 11:15:43.458 STDOUT terraform:  + block_device { 2025-06-22 11:15:43.458448 | orchestrator | 11:15:43.458 STDOUT terraform:  + boot_index = 0 2025-06-22 11:15:43.458492 | orchestrator | 11:15:43.458 STDOUT terraform:  + delete_on_termination = false 2025-06-22 11:15:43.458517 | orchestrator | 11:15:43.458 STDOUT terraform:  + destination_type = "volume" 2025-06-22 11:15:43.458524 | orchestrator | 11:15:43.458 STDOUT terraform:  + multiattach = false 2025-06-22 11:15:43.458580 | orchestrator | 11:15:43.458 STDOUT terraform:  + source_type = "volume" 2025-06-22 11:15:43.458589 | orchestrator | 11:15:43.458 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 11:15:43.458627 | orchestrator | 11:15:43.458 STDOUT terraform:  } 2025-06-22 11:15:43.458640 | orchestrator | 11:15:43.458 STDOUT terraform:  + network { 2025-06-22 11:15:43.458650 | orchestrator | 11:15:43.458 STDOUT terraform:  + access_network = false 2025-06-22 11:15:43.458665 | orchestrator | 11:15:43.458 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 11:15:43.458726 | orchestrator | 11:15:43.458 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 11:15:43.458733 | orchestrator | 11:15:43.458 STDOUT terraform:  + mac = (known after apply) 2025-06-22 11:15:43.458759 | orchestrator | 11:15:43.458 STDOUT terraform:  + name = (known after apply) 2025-06-22 11:15:43.458820 | orchestrator | 11:15:43.458 STDOUT terraform:  + port = (known after apply) 2025-06-22 
11:15:43.458826 | orchestrator | 11:15:43.458 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 11:15:43.458831 | orchestrator | 11:15:43.458 STDOUT terraform:  } 2025-06-22 11:15:43.458842 | orchestrator | 11:15:43.458 STDOUT terraform:  } 2025-06-22 11:15:43.458880 | orchestrator | 11:15:43.458 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-22 11:15:43.458969 | orchestrator | 11:15:43.458 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 11:15:43.458976 | orchestrator | 11:15:43.458 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 11:15:43.458984 | orchestrator | 11:15:43.458 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 11:15:43.459041 | orchestrator | 11:15:43.458 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 11:15:43.459049 | orchestrator | 11:15:43.459 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.459091 | orchestrator | 11:15:43.459 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.459097 | orchestrator | 11:15:43.459 STDOUT terraform:  + config_drive = true 2025-06-22 11:15:43.459150 | orchestrator | 11:15:43.459 STDOUT terraform:  + created = (known after apply) 2025-06-22 11:15:43.459158 | orchestrator | 11:15:43.459 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 11:15:43.459184 | orchestrator | 11:15:43.459 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 11:15:43.459220 | orchestrator | 11:15:43.459 STDOUT terraform:  + force_delete = false 2025-06-22 11:15:43.459260 | orchestrator | 11:15:43.459 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 11:15:43.459278 | orchestrator | 11:15:43.459 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.459310 | orchestrator | 11:15:43.459 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 11:15:43.459356 | orchestrator | 11:15:43.459 STDOUT 
terraform:  + image_name = (known after apply) 2025-06-22 11:15:43.459364 | orchestrator | 11:15:43.459 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 11:15:43.459402 | orchestrator | 11:15:43.459 STDOUT terraform:  + name = "testbed-node-1" 2025-06-22 11:15:43.459414 | orchestrator | 11:15:43.459 STDOUT terraform:  + power_state = "active" 2025-06-22 11:15:43.459464 | orchestrator | 11:15:43.459 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.459474 | orchestrator | 11:15:43.459 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 11:15:43.459505 | orchestrator | 11:15:43.459 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 11:15:43.459554 | orchestrator | 11:15:43.459 STDOUT terraform:  + updated = (known after apply) 2025-06-22 11:15:43.459612 | orchestrator | 11:15:43.459 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 11:15:43.459622 | orchestrator | 11:15:43.459 STDOUT terraform:  + block_device { 2025-06-22 11:15:43.459629 | orchestrator | 11:15:43.459 STDOUT terraform:  + boot_index = 0 2025-06-22 11:15:43.459663 | orchestrator | 11:15:43.459 STDOUT terraform:  + delete_on_termination = false 2025-06-22 11:15:43.459701 | orchestrator | 11:15:43.459 STDOUT terraform:  + destination_type = "volume" 2025-06-22 11:15:43.459709 | orchestrator | 11:15:43.459 STDOUT terraform:  + multiattach = false 2025-06-22 11:15:43.459730 | orchestrator | 11:15:43.459 STDOUT terraform:  + source_type = "volume" 2025-06-22 11:15:43.459786 | orchestrator | 11:15:43.459 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 11:15:43.459792 | orchestrator | 11:15:43.459 STDOUT terraform:  } 2025-06-22 11:15:43.459799 | orchestrator | 11:15:43.459 STDOUT terraform:  + network { 2025-06-22 11:15:43.459806 | orchestrator | 11:15:43.459 STDOUT terraform:  + access_network = false 2025-06-22 11:15:43.459851 | orchestrator | 11:15:43.459 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-06-22 11:15:43.459862 | orchestrator | 11:15:43.459 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 11:15:43.459915 | orchestrator | 11:15:43.459 STDOUT terraform:  + mac = (known after apply) 2025-06-22 11:15:43.459923 | orchestrator | 11:15:43.459 STDOUT terraform:  + name = (known after apply) 2025-06-22 11:15:43.459993 | orchestrator | 11:15:43.459 STDOUT terraform:  + port = (known after apply) 2025-06-22 11:15:43.462231 | orchestrator | 11:15:43.459 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 11:15:43.462296 | orchestrator | 11:15:43.462 STDOUT terraform:  } 2025-06-22 11:15:43.462310 | orchestrator | 11:15:43.462 STDOUT terraform:  } 2025-06-22 11:15:43.462373 | orchestrator | 11:15:43.462 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-22 11:15:43.462431 | orchestrator | 11:15:43.462 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 11:15:43.462485 | orchestrator | 11:15:43.462 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 11:15:43.462530 | orchestrator | 11:15:43.462 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 11:15:43.462576 | orchestrator | 11:15:43.462 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 11:15:43.462635 | orchestrator | 11:15:43.462 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.462657 | orchestrator | 11:15:43.462 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 11:15:43.462680 | orchestrator | 11:15:43.462 STDOUT terraform:  + config_drive = true 2025-06-22 11:15:43.462729 | orchestrator | 11:15:43.462 STDOUT terraform:  + created = (known after apply) 2025-06-22 11:15:43.462780 | orchestrator | 11:15:43.462 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 11:15:43.462823 | orchestrator | 11:15:43.462 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 11:15:43.462830 | orchestrator | 11:15:43.462 
2025-06-22 11:15:43.462 | orchestrator | 11:15:43.462 STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 11:15:43.476281 | orchestrator | 11:15:43.476 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 11:15:43.476323 | orchestrator | 11:15:43.476 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.476356 | orchestrator | 11:15:43.476 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 11:15:43.476392 | orchestrator | 11:15:43.476 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 11:15:43.483100 | orchestrator | 11:15:43.476 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 11:15:43.483172 | orchestrator | 11:15:43.476 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 11:15:43.483178 | orchestrator | 11:15:43.476 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483183 | orchestrator | 11:15:43.476 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 11:15:43.483188 | orchestrator | 11:15:43.476 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 11:15:43.483192 | orchestrator | 11:15:43.476 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 11:15:43.483197 | orchestrator | 11:15:43.476 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 11:15:43.483201 | orchestrator | 11:15:43.476 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.483204 | orchestrator | 11:15:43.476 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 11:15:43.483208 | orchestrator | 11:15:43.476 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.483213 | orchestrator | 11:15:43.476 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483217 | orchestrator | 11:15:43.476 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 11:15:43.483222 | orchestrator | 11:15:43.476 STDOUT terraform:  } 2025-06-22 11:15:43.483226 | orchestrator | 11:15:43.476 STDOUT terraform:  
+ allowed_address_pairs { 2025-06-22 11:15:43.483230 | orchestrator | 11:15:43.476 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 11:15:43.483234 | orchestrator | 11:15:43.476 STDOUT terraform:  } 2025-06-22 11:15:43.483238 | orchestrator | 11:15:43.476 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483242 | orchestrator | 11:15:43.476 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 11:15:43.483259 | orchestrator | 11:15:43.476 STDOUT terraform:  } 2025-06-22 11:15:43.483263 | orchestrator | 11:15:43.476 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483267 | orchestrator | 11:15:43.476 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 11:15:43.483270 | orchestrator | 11:15:43.476 STDOUT terraform:  } 2025-06-22 11:15:43.483275 | orchestrator | 11:15:43.476 STDOUT terraform:  + binding (known after apply) 2025-06-22 11:15:43.483279 | orchestrator | 11:15:43.476 STDOUT terraform:  + fixed_ip { 2025-06-22 11:15:43.483284 | orchestrator | 11:15:43.476 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-22 11:15:43.483290 | orchestrator | 11:15:43.477 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 11:15:43.483296 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483303 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483309 | orchestrator | 11:15:43.477 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-22 11:15:43.483318 | orchestrator | 11:15:43.477 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 11:15:43.483324 | orchestrator | 11:15:43.477 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 11:15:43.483330 | orchestrator | 11:15:43.477 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 11:15:43.483337 | orchestrator | 11:15:43.477 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-22 11:15:43.483343 | orchestrator | 11:15:43.477 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.483349 | orchestrator | 11:15:43.477 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 11:15:43.483356 | orchestrator | 11:15:43.477 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 11:15:43.483363 | orchestrator | 11:15:43.477 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 11:15:43.483367 | orchestrator | 11:15:43.477 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 11:15:43.483390 | orchestrator | 11:15:43.477 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483395 | orchestrator | 11:15:43.477 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 11:15:43.483398 | orchestrator | 11:15:43.477 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 11:15:43.483402 | orchestrator | 11:15:43.477 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 11:15:43.483406 | orchestrator | 11:15:43.477 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 11:15:43.483410 | orchestrator | 11:15:43.477 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.483413 | orchestrator | 11:15:43.477 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 11:15:43.483417 | orchestrator | 11:15:43.477 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.483421 | orchestrator | 11:15:43.477 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483425 | orchestrator | 11:15:43.477 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 11:15:43.483433 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483437 | orchestrator | 11:15:43.477 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483441 | orchestrator | 11:15:43.477 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 11:15:43.483445 | 
orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483448 | orchestrator | 11:15:43.477 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483452 | orchestrator | 11:15:43.477 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 11:15:43.483456 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483460 | orchestrator | 11:15:43.477 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483464 | orchestrator | 11:15:43.477 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 11:15:43.483467 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483471 | orchestrator | 11:15:43.477 STDOUT terraform:  + binding (known after apply) 2025-06-22 11:15:43.483475 | orchestrator | 11:15:43.477 STDOUT terraform:  + fixed_ip { 2025-06-22 11:15:43.483479 | orchestrator | 11:15:43.477 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-22 11:15:43.483483 | orchestrator | 11:15:43.477 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 11:15:43.483487 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483490 | orchestrator | 11:15:43.477 STDOUT terraform:  } 2025-06-22 11:15:43.483494 | orchestrator | 11:15:43.477 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-22 11:15:43.483498 | orchestrator | 11:15:43.477 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 11:15:43.483502 | orchestrator | 11:15:43.479 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 11:15:43.483506 | orchestrator | 11:15:43.479 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 11:15:43.483510 | orchestrator | 11:15:43.479 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 11:15:43.483513 | orchestrator | 11:15:43.479 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.483517 | orchestrator | 
11:15:43.479 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 11:15:43.483521 | orchestrator | 11:15:43.479 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 11:15:43.483525 | orchestrator | 11:15:43.479 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 11:15:43.483529 | orchestrator | 11:15:43.479 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 11:15:43.483532 | orchestrator | 11:15:43.479 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483536 | orchestrator | 11:15:43.479 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 11:15:43.483540 | orchestrator | 11:15:43.479 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 11:15:43.483547 | orchestrator | 11:15:43.479 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 11:15:43.483555 | orchestrator | 11:15:43.479 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 11:15:43.483559 | orchestrator | 11:15:43.479 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.483563 | orchestrator | 11:15:43.479 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 11:15:43.483568 | orchestrator | 11:15:43.479 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.483571 | orchestrator | 11:15:43.479 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483575 | orchestrator | 11:15:43.479 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 11:15:43.483579 | orchestrator | 11:15:43.479 STDOUT terraform:  } 2025-06-22 11:15:43.483583 | orchestrator | 11:15:43.479 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483587 | orchestrator | 11:15:43.479 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 11:15:43.483590 | orchestrator | 11:15:43.479 STDOUT terraform:  } 2025-06-22 11:15:43.483594 | orchestrator | 11:15:43.479 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 
11:15:43.483598 | orchestrator | 11:15:43.479 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 11:15:43.483602 | orchestrator | 11:15:43.479 STDOUT terraform:  } 2025-06-22 11:15:43.483605 | orchestrator | 11:15:43.479 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483609 | orchestrator | 11:15:43.479 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 11:15:43.483613 | orchestrator | 11:15:43.479 STDOUT terraform:  } 2025-06-22 11:15:43.483617 | orchestrator | 11:15:43.479 STDOUT terraform:  + binding (known after apply) 2025-06-22 11:15:43.483621 | orchestrator | 11:15:43.479 STDOUT terraform:  + fixed_ip { 2025-06-22 11:15:43.483625 | orchestrator | 11:15:43.479 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-22 11:15:43.483628 | orchestrator | 11:15:43.479 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 11:15:43.483632 | orchestrator | 11:15:43.479 STDOUT terraform:  } 2025-06-22 11:15:43.483636 | orchestrator | 11:15:43.479 STDOUT terraform:  } 2025-06-22 11:15:43.483640 | orchestrator | 11:15:43.479 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-22 11:15:43.483644 | orchestrator | 11:15:43.479 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 11:15:43.483648 | orchestrator | 11:15:43.479 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 11:15:43.483651 | orchestrator | 11:15:43.479 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 11:15:43.483655 | orchestrator | 11:15:43.479 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 11:15:43.483659 | orchestrator | 11:15:43.480 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.483663 | orchestrator | 11:15:43.480 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 11:15:43.483666 | orchestrator | 11:15:43.480 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-22 11:15:43.483670 | orchestrator | 11:15:43.480 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 11:15:43.483681 | orchestrator | 11:15:43.480 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 11:15:43.483685 | orchestrator | 11:15:43.480 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483689 | orchestrator | 11:15:43.480 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 11:15:43.483693 | orchestrator | 11:15:43.480 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 11:15:43.483697 | orchestrator | 11:15:43.480 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 11:15:43.483700 | orchestrator | 11:15:43.480 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 11:15:43.483711 | orchestrator | 11:15:43.480 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.483716 | orchestrator | 11:15:43.480 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 11:15:43.483720 | orchestrator | 11:15:43.480 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.483723 | orchestrator | 11:15:43.480 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483727 | orchestrator | 11:15:43.480 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 11:15:43.483731 | orchestrator | 11:15:43.480 STDOUT terraform:  } 2025-06-22 11:15:43.483735 | orchestrator | 11:15:43.480 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483739 | orchestrator | 11:15:43.480 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 11:15:43.483742 | orchestrator | 11:15:43.480 STDOUT terraform:  } 2025-06-22 11:15:43.483746 | orchestrator | 11:15:43.480 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483750 | orchestrator | 11:15:43.480 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 11:15:43.483754 | orchestrator | 11:15:43.480 STDOUT terraform:  } 
2025-06-22 11:15:43.483757 | orchestrator | 11:15:43.480 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 11:15:43.483761 | orchestrator | 11:15:43.480 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 11:15:43.483765 | orchestrator | 11:15:43.480 STDOUT terraform:  } 2025-06-22 11:15:43.483769 | orchestrator | 11:15:43.480 STDOUT terraform:  + binding (known after apply) 2025-06-22 11:15:43.483773 | orchestrator | 11:15:43.480 STDOUT terraform:  + fixed_ip { 2025-06-22 11:15:43.483776 | orchestrator | 11:15:43.480 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-22 11:15:43.483780 | orchestrator | 11:15:43.480 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 11:15:43.483784 | orchestrator | 11:15:43.480 STDOUT terraform:  } 2025-06-22 11:15:43.483788 | orchestrator | 11:15:43.480 STDOUT terraform:  } 2025-06-22 11:15:43.483791 | orchestrator | 11:15:43.480 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-22 11:15:43.483795 | orchestrator | 11:15:43.480 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-22 11:15:43.483799 | orchestrator | 11:15:43.480 STDOUT terraform:  + force_destroy = false 2025-06-22 11:15:43.483803 | orchestrator | 11:15:43.480 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483810 | orchestrator | 11:15:43.480 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 11:15:43.483814 | orchestrator | 11:15:43.480 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.483850 | orchestrator | 11:15:43.480 STDOUT terraform:  + router_id = (known after apply) 2025-06-22 11:15:43.483854 | orchestrator | 11:15:43.480 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 11:15:43.483857 | orchestrator | 11:15:43.480 STDOUT terraform:  } 2025-06-22 11:15:43.483861 | orchestrator | 11:15:43.480 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-06-22 11:15:43.483865 | orchestrator | 11:15:43.480 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-22 11:15:43.483869 | orchestrator | 11:15:43.481 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 11:15:43.483873 | orchestrator | 11:15:43.481 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 11:15:43.483877 | orchestrator | 11:15:43.481 STDOUT terraform:  + availability_zone_hints = [ 2025-06-22 11:15:43.483883 | orchestrator | 11:15:43.481 STDOUT terraform:  + "nova", 2025-06-22 11:15:43.483887 | orchestrator | 11:15:43.481 STDOUT terraform:  ] 2025-06-22 11:15:43.483891 | orchestrator | 11:15:43.481 STDOUT terraform:  + distributed = (known after apply) 2025-06-22 11:15:43.483895 | orchestrator | 11:15:43.481 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-22 11:15:43.483898 | orchestrator | 11:15:43.481 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-22 11:15:43.483916 | orchestrator | 11:15:43.481 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-06-22 11:15:43.483920 | orchestrator | 11:15:43.481 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483924 | orchestrator | 11:15:43.481 STDOUT terraform:  + name = "testbed" 2025-06-22 11:15:43.483928 | orchestrator | 11:15:43.481 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.483962 | orchestrator | 11:15:43.481 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.483966 | orchestrator | 11:15:43.481 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-22 11:15:43.483970 | orchestrator | 11:15:43.481 STDOUT terraform:  } 2025-06-22 11:15:43.483974 | orchestrator | 11:15:43.481 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-22 11:15:43.483980 | orchestrator | 11:15:43.481 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-22 11:15:43.483984 | orchestrator | 11:15:43.481 STDOUT terraform:  + description = "ssh" 2025-06-22 11:15:43.483988 | orchestrator | 11:15:43.481 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.483991 | orchestrator | 11:15:43.481 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.483995 | orchestrator | 11:15:43.481 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.483999 | orchestrator | 11:15:43.481 STDOUT terraform:  + port_range_max = 22 2025-06-22 11:15:43.484003 | orchestrator | 11:15:43.481 STDOUT terraform:  + port_range_min = 22 2025-06-22 11:15:43.484011 | orchestrator | 11:15:43.481 STDOUT terraform:  + protocol = "tcp" 2025-06-22 11:15:43.484015 | orchestrator | 11:15:43.481 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.484018 | orchestrator | 11:15:43.481 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 11:15:43.484022 | orchestrator | 11:15:43.481 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 11:15:43.484026 | orchestrator | 11:15:43.481 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 11:15:43.484030 | orchestrator | 11:15:43.481 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 11:15:43.484033 | orchestrator | 11:15:43.481 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.484037 | orchestrator | 11:15:43.481 STDOUT terraform:  } 2025-06-22 11:15:43.484041 | orchestrator | 11:15:43.481 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-22 11:15:43.484045 | orchestrator | 11:15:43.481 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-22 11:15:43.484049 | orchestrator | 11:15:43.481 STDOUT terraform:  + description = "wireguard" 2025-06-22 11:15:43.484052 | orchestrator 
| 11:15:43.481 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.484056 | orchestrator | 11:15:43.482 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.484060 | orchestrator | 11:15:43.482 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.484064 | orchestrator | 11:15:43.482 STDOUT terraform:  + port_range_max = 51820 2025-06-22 11:15:43.484067 | orchestrator | 11:15:43.482 STDOUT terraform:  + port_range_min = 51820 2025-06-22 11:15:43.484071 | orchestrator | 11:15:43.482 STDOUT terraform:  + protocol = "udp" 2025-06-22 11:15:43.484075 | orchestrator | 11:15:43.482 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.484079 | orchestrator | 11:15:43.482 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 11:15:43.484083 | orchestrator | 11:15:43.482 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 11:15:43.484086 | orchestrator | 11:15:43.482 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 11:15:43.484097 | orchestrator | 11:15:43.482 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 11:15:43.484101 | orchestrator | 11:15:43.482 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.484105 | orchestrator | 11:15:43.482 STDOUT terraform:  } 2025-06-22 11:15:43.484109 | orchestrator | 11:15:43.482 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-22 11:15:43.484112 | orchestrator | 11:15:43.482 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-22 11:15:43.484116 | orchestrator | 11:15:43.482 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.484120 | orchestrator | 11:15:43.482 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.484127 | orchestrator | 11:15:43.482 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.484131 | orchestrator | 
11:15:43.482 STDOUT terraform:  + protocol = "tcp" 2025-06-22 11:15:43.484135 | orchestrator | 11:15:43.482 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.484139 | orchestrator | 11:15:43.482 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 11:15:43.484142 | orchestrator | 11:15:43.482 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 11:15:43.484146 | orchestrator | 11:15:43.482 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 11:15:43.484150 | orchestrator | 11:15:43.482 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 11:15:43.484154 | orchestrator | 11:15:43.482 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.484157 | orchestrator | 11:15:43.482 STDOUT terraform:  } 2025-06-22 11:15:43.484161 | orchestrator | 11:15:43.482 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-22 11:15:43.484165 | orchestrator | 11:15:43.482 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-22 11:15:43.484169 | orchestrator | 11:15:43.482 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.484172 | orchestrator | 11:15:43.482 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.484176 | orchestrator | 11:15:43.482 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.484180 | orchestrator | 11:15:43.482 STDOUT terraform:  + protocol = "udp" 2025-06-22 11:15:43.484184 | orchestrator | 11:15:43.482 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.484187 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 11:15:43.484191 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 11:15:43.484195 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-06-22 11:15:43.484199 | orchestrator | 11:15:43.483 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 11:15:43.484202 | orchestrator | 11:15:43.483 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.484206 | orchestrator | 11:15:43.483 STDOUT terraform:  } 2025-06-22 11:15:43.484210 | orchestrator | 11:15:43.483 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-22 11:15:43.484214 | orchestrator | 11:15:43.483 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-22 11:15:43.484218 | orchestrator | 11:15:43.483 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.484221 | orchestrator | 11:15:43.483 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.484225 | orchestrator | 11:15:43.483 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.484229 | orchestrator | 11:15:43.483 STDOUT terraform:  + protocol = "icmp" 2025-06-22 11:15:43.484240 | orchestrator | 11:15:43.483 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.484244 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 11:15:43.484247 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 11:15:43.484251 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 11:15:43.484277 | orchestrator | 11:15:43.483 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 11:15:43.484281 | orchestrator | 11:15:43.483 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.484285 | orchestrator | 11:15:43.483 STDOUT terraform:  } 2025-06-22 11:15:43.484289 | orchestrator | 11:15:43.483 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-22 11:15:43.484293 | 
orchestrator | 11:15:43.483 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-22 11:15:43.484297 | orchestrator | 11:15:43.483 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.484301 | orchestrator | 11:15:43.483 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.484305 | orchestrator | 11:15:43.483 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.484309 | orchestrator | 11:15:43.483 STDOUT terraform:  + protocol = "tcp" 2025-06-22 11:15:43.484313 | orchestrator | 11:15:43.483 STDOUT terraform:  + region = (known after apply) 2025-06-22 11:15:43.484316 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 11:15:43.484320 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 11:15:43.484324 | orchestrator | 11:15:43.483 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 11:15:43.484328 | orchestrator | 11:15:43.483 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 11:15:43.484331 | orchestrator | 11:15:43.484 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 11:15:43.484335 | orchestrator | 11:15:43.484 STDOUT terraform:  } 2025-06-22 11:15:43.484340 | orchestrator | 11:15:43.484 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-22 11:15:43.484344 | orchestrator | 11:15:43.484 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-22 11:15:43.484347 | orchestrator | 11:15:43.484 STDOUT terraform:  + direction = "ingress" 2025-06-22 11:15:43.484351 | orchestrator | 11:15:43.484 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 11:15:43.484355 | orchestrator | 11:15:43.484 STDOUT terraform:  + id = (known after apply) 2025-06-22 11:15:43.484359 | orchestrator | 11:15:43.484 STDOUT terraform:  + protocol = "udp" 
2025-06-22 11:15:43.484365 | orchestrator | 11:15:43.484 STDOUT terraform:  + region = (known after apply)
2025-06-22 11:15:43.484369 | orchestrator | 11:15:43.484 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-22 11:15:43.484372 | orchestrator | 11:15:43.484 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-22 11:15:43.484381 | orchestrator | 11:15:43.484 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-22 11:15:43.484387 | orchestrator | 11:15:43.484 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-22 11:15:43.484430 | orchestrator | 11:15:43.484 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-22 11:15:43.484435 | orchestrator | 11:15:43.484 STDOUT terraform:  }
2025-06-22 11:15:43.484489 | orchestrator | 11:15:43.484 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-06-22 11:15:43.484539 | orchestrator | 11:15:43.484 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-06-22 11:15:43.484568 | orchestrator | 11:15:43.484 STDOUT terraform:  + direction = "ingress"
2025-06-22 11:15:43.484597 | orchestrator | 11:15:43.484 STDOUT terraform:  + ethertype = "IPv4"
2025-06-22 11:15:43.484632 | orchestrator | 11:15:43.484 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.484660 | orchestrator | 11:15:43.484 STDOUT terraform:  + protocol = "icmp"
2025-06-22 11:15:43.484696 | orchestrator | 11:15:43.484 STDOUT terraform:  + region = (known after apply)
2025-06-22 11:15:43.484732 | orchestrator | 11:15:43.484 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-22 11:15:43.484770 | orchestrator | 11:15:43.484 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-22 11:15:43.484800 | orchestrator | 11:15:43.484 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-22 11:15:43.484836 | orchestrator | 11:15:43.484 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-22 11:15:43.484873 | orchestrator | 11:15:43.484 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-22 11:15:43.484880 | orchestrator | 11:15:43.484 STDOUT terraform:  }
2025-06-22 11:15:43.484941 | orchestrator | 11:15:43.484 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-22 11:15:43.484991 | orchestrator | 11:15:43.484 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-22 11:15:43.485017 | orchestrator | 11:15:43.484 STDOUT terraform:  + description = "vrrp"
2025-06-22 11:15:43.485047 | orchestrator | 11:15:43.485 STDOUT terraform:  + direction = "ingress"
2025-06-22 11:15:43.485075 | orchestrator | 11:15:43.485 STDOUT terraform:  + ethertype = "IPv4"
2025-06-22 11:15:43.485112 | orchestrator | 11:15:43.485 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.485138 | orchestrator | 11:15:43.485 STDOUT terraform:  + protocol = "112"
2025-06-22 11:15:43.485176 | orchestrator | 11:15:43.485 STDOUT terraform:  + region = (known after apply)
2025-06-22 11:15:43.485211 | orchestrator | 11:15:43.485 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-22 11:15:43.485247 | orchestrator | 11:15:43.485 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-22 11:15:43.485278 | orchestrator | 11:15:43.485 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-22 11:15:43.485315 | orchestrator | 11:15:43.485 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-22 11:15:43.485356 | orchestrator | 11:15:43.485 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-22 11:15:43.485361 | orchestrator | 11:15:43.485 STDOUT terraform:  }
2025-06-22 11:15:43.485410 | orchestrator | 11:15:43.485 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-22 11:15:43.485459 | orchestrator | 11:15:43.485 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-22 11:15:43.485488 | orchestrator | 11:15:43.485 STDOUT terraform:  + all_tags = (known after apply)
2025-06-22 11:15:43.485524 | orchestrator | 11:15:43.485 STDOUT terraform:  + description = "management security group"
2025-06-22 11:15:43.485553 | orchestrator | 11:15:43.485 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.485582 | orchestrator | 11:15:43.485 STDOUT terraform:  + name = "testbed-management"
2025-06-22 11:15:43.485614 | orchestrator | 11:15:43.485 STDOUT terraform:  + region = (known after apply)
2025-06-22 11:15:43.485642 | orchestrator | 11:15:43.485 STDOUT terraform:  + stateful = (known after apply)
2025-06-22 11:15:43.485670 | orchestrator | 11:15:43.485 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-22 11:15:43.485678 | orchestrator | 11:15:43.485 STDOUT terraform:  }
2025-06-22 11:15:43.485724 | orchestrator | 11:15:43.485 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-22 11:15:43.485773 | orchestrator | 11:15:43.485 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-22 11:15:43.485805 | orchestrator | 11:15:43.485 STDOUT terraform:  + all_tags = (known after apply)
2025-06-22 11:15:43.485832 | orchestrator | 11:15:43.485 STDOUT terraform:  + description = "node security group"
2025-06-22 11:15:43.485861 | orchestrator | 11:15:43.485 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.485887 | orchestrator | 11:15:43.485 STDOUT terraform:  + name = "testbed-node"
2025-06-22 11:15:43.485916 | orchestrator | 11:15:43.485 STDOUT terraform:  + region = (known after apply)
2025-06-22 11:15:43.486048 | orchestrator | 11:15:43.485 STDOUT terraform:  + stateful = (known after apply)
2025-06-22 11:15:43.486074 | orchestrator | 11:15:43.486 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-22 11:15:43.486081 | orchestrator | 11:15:43.486 STDOUT terraform:  }
2025-06-22 11:15:43.486252 | orchestrator | 11:15:43.486 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-22 11:15:43.486261 | orchestrator | 11:15:43.486 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-22 11:15:43.486265 | orchestrator | 11:15:43.486 STDOUT terraform:  + all_tags = (known after apply)
2025-06-22 11:15:43.486269 | orchestrator | 11:15:43.486 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-22 11:15:43.486273 | orchestrator | 11:15:43.486 STDOUT terraform:  + dns_nameservers = [
2025-06-22 11:15:43.486279 | orchestrator | 11:15:43.486 STDOUT terraform:  + "8.8.8.8",
2025-06-22 11:15:43.486283 | orchestrator | 11:15:43.486 STDOUT terraform:  + "9.9.9.9",
2025-06-22 11:15:43.486294 | orchestrator | 11:15:43.486 STDOUT terraform:  ]
2025-06-22 11:15:43.486300 | orchestrator | 11:15:43.486 STDOUT terraform:  + enable_dhcp = true
2025-06-22 11:15:43.486329 | orchestrator | 11:15:43.486 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-22 11:15:43.486361 | orchestrator | 11:15:43.486 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.486382 | orchestrator | 11:15:43.486 STDOUT terraform:  + ip_version = 4
2025-06-22 11:15:43.486416 | orchestrator | 11:15:43.486 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-22 11:15:43.486447 | orchestrator | 11:15:43.486 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-22 11:15:43.486487 | orchestrator | 11:15:43.486 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-22 11:15:43.486518 | orchestrator | 11:15:43.486 STDOUT terraform:  + network_id = (known after apply)
2025-06-22 11:15:43.486539 | orchestrator | 11:15:43.486 STDOUT terraform:  + no_gateway = false
2025-06-22 11:15:43.486573 | orchestrator | 11:15:43.486 STDOUT terraform:  + region = (known after apply)
2025-06-22 11:15:43.486603 | orchestrator | 11:15:43.486 STDOUT terraform:  + service_types = (known after apply)
2025-06-22 11:15:43.486632 | orchestrator | 11:15:43.486 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-22 11:15:43.486655 | orchestrator | 11:15:43.486 STDOUT terraform:  + allocation_pool {
2025-06-22 11:15:43.486679 | orchestrator | 11:15:43.486 STDOUT terraform:  + end = "192.168.31.250"
2025-06-22 11:15:43.486705 | orchestrator | 11:15:43.486 STDOUT terraform:  + start = "192.168.31.200"
2025-06-22 11:15:43.486727 | orchestrator | 11:15:43.486 STDOUT terraform:  }
2025-06-22 11:15:43.486748 | orchestrator | 11:15:43.486 STDOUT terraform:  }
2025-06-22 11:15:43.486777 | orchestrator | 11:15:43.486 STDOUT terraform:  # terraform_data.image will be created
2025-06-22 11:15:43.486803 | orchestrator | 11:15:43.486 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-22 11:15:43.486828 | orchestrator | 11:15:43.486 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.486851 | orchestrator | 11:15:43.486 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-22 11:15:43.486877 | orchestrator | 11:15:43.486 STDOUT terraform:  + output = (known after apply)
2025-06-22 11:15:43.486885 | orchestrator | 11:15:43.486 STDOUT terraform:  }
2025-06-22 11:15:43.486918 | orchestrator | 11:15:43.486 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-22 11:15:43.486962 | orchestrator | 11:15:43.486 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-22 11:15:43.486988 | orchestrator | 11:15:43.486 STDOUT terraform:  + id = (known after apply)
2025-06-22 11:15:43.487012 | orchestrator | 11:15:43.486 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-22 11:15:43.487036 | orchestrator | 11:15:43.487 STDOUT terraform:  + output = (known after apply)
2025-06-22 11:15:43.487065 | orchestrator | 11:15:43.487 STDOUT terraform:  }
2025-06-22 11:15:43.487098 | orchestrator | 11:15:43.487 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-22 11:15:43.487105 | orchestrator | 11:15:43.487 STDOUT terraform: Changes to Outputs:
2025-06-22 11:15:43.487136 | orchestrator | 11:15:43.487 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-22 11:15:43.487163 | orchestrator | 11:15:43.487 STDOUT terraform:  + private_key = (sensitive value)
2025-06-22 11:15:43.680757 | orchestrator | 11:15:43.680 STDOUT terraform: terraform_data.image: Creating...
2025-06-22 11:15:43.681196 | orchestrator | 11:15:43.681 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-22 11:15:43.681682 | orchestrator | 11:15:43.681 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=0fc36346-95b1-9f28-73d8-04687c80eb99]
2025-06-22 11:15:44.204028 | orchestrator | 11:15:44.201 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=3443b6e3-d5f5-62a9-1821-477028294299]
2025-06-22 11:15:44.217357 | orchestrator | 11:15:44.217 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-22 11:15:44.217641 | orchestrator | 11:15:44.217 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-22 11:15:44.226247 | orchestrator | 11:15:44.226 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-22 11:15:44.226588 | orchestrator | 11:15:44.226 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-22 11:15:44.227284 | orchestrator | 11:15:44.227 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-22 11:15:44.229082 | orchestrator | 11:15:44.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-22 11:15:44.231819 | orchestrator | 11:15:44.231 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-22 11:15:44.234657 | orchestrator | 11:15:44.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
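(Editor's note, not part of the job output: the plan entries above can be read back into Terraform configuration. The following is a minimal HCL sketch reconstructed solely from the plan values shown in this log; it is not the testbed repository's actual code, and the `security_group_id` and `network_id` references are assumptions, since the plan only shows them as "(known after apply)".)

```hcl
# Sketch reconstructed from the plan output above -- not the actual testbed sources.

# VRRP uses IP protocol number 112, matching protocol = "112" in the plan.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumption: attached to the management security group; the plan does not say which.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  # Assumption: belongs to net_management, which appears elsewhere in this log.
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```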
2025-06-22 11:15:44.236479 | orchestrator | 11:15:44.236 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-22 11:15:44.237006 | orchestrator | 11:15:44.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-22 11:15:44.773336 | orchestrator | 11:15:44.772 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-22 11:15:44.786653 | orchestrator | 11:15:44.786 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-22 11:15:44.803625 | orchestrator | 11:15:44.803 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-22 11:15:44.810223 | orchestrator | 11:15:44.810 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-22 11:15:44.889273 | orchestrator | 11:15:44.888 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-06-22 11:15:44.896400 | orchestrator | 11:15:44.896 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-22 11:15:50.348811 | orchestrator | 11:15:50.348 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=aae46bad-b912-4bd2-92ef-246047ea0043]
2025-06-22 11:15:50.355566 | orchestrator | 11:15:50.355 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-22 11:15:54.227551 | orchestrator | 11:15:54.227 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-22 11:15:54.228681 | orchestrator | 11:15:54.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-22 11:15:54.230895 | orchestrator | 11:15:54.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-22 11:15:54.233144 | orchestrator | 11:15:54.232 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-22 11:15:54.237496 | orchestrator | 11:15:54.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-22 11:15:54.238749 | orchestrator | 11:15:54.238 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-22 11:15:54.787596 | orchestrator | 11:15:54.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-22 11:15:54.810825 | orchestrator | 11:15:54.810 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-22 11:15:54.898101 | orchestrator | 11:15:54.897 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-22 11:15:54.912510 | orchestrator | 11:15:54.912 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=a129606c-fab1-48ed-9350-9d2eafddbd52]
2025-06-22 11:15:54.923457 | orchestrator | 11:15:54.923 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-22 11:15:54.951782 | orchestrator | 11:15:54.951 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=a273c01c-52c4-42f8-a181-d91a87ff3a5e]
2025-06-22 11:15:54.963381 | orchestrator | 11:15:54.963 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-22 11:15:54.972414 | orchestrator | 11:15:54.972 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=0234f42c-6d02-44b8-b796-e801f7c6659f]
2025-06-22 11:15:54.979002 | orchestrator | 11:15:54.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-22 11:15:54.995506 | orchestrator | 11:15:54.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=899f0377-b87c-421a-9d44-3bd393f5c125]
2025-06-22 11:15:55.001234 | orchestrator | 11:15:55.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-22 11:15:55.008145 | orchestrator | 11:15:55.007 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=7610229b-d7bf-450f-9964-1d42e936a357]
2025-06-22 11:15:55.016467 | orchestrator | 11:15:55.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-22 11:15:55.028983 | orchestrator | 11:15:55.028 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=c288123e-75d1-4d08-8561-55f7fbbd7c1b]
2025-06-22 11:15:55.038533 | orchestrator | 11:15:55.038 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-22 11:15:55.049567 | orchestrator | 11:15:55.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=4b47f8cd-db2a-4bea-898d-3d48c49a84c2]
2025-06-22 11:15:55.061503 | orchestrator | 11:15:55.061 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=060f7999-6812-4095-99a7-aa228581a5cf]
2025-06-22 11:15:55.070056 | orchestrator | 11:15:55.069 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-22 11:15:55.075135 | orchestrator | 11:15:55.075 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-22 11:15:55.076273 | orchestrator | 11:15:55.076 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=a02d50fe72ab308122ddf2269c84342e79a6d7e7]
2025-06-22 11:15:55.085837 | orchestrator | 11:15:55.085 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=dbf2234af879f7b7272c63b3f41b9455070d6c1e]
2025-06-22 11:15:55.090342 | orchestrator | 11:15:55.090 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-22 11:15:55.093270 | orchestrator | 11:15:55.092 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=95ca9be4-ae4c-4603-a11a-c98b5f55b273]
2025-06-22 11:16:00.356648 | orchestrator | 11:16:00.356 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-22 11:16:00.666589 | orchestrator | 11:16:00.666 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=411f1594-a25d-467f-afef-a642d1c14efc]
2025-06-22 11:16:00.998754 | orchestrator | 11:16:00.998 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=dd45b093-fb8c-47fb-b2a8-65b9adab201d]
2025-06-22 11:16:01.007464 | orchestrator | 11:16:01.007 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-22 11:16:04.924851 | orchestrator | 11:16:04.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-22 11:16:04.963962 | orchestrator | 11:16:04.963 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-22 11:16:04.980234 | orchestrator | 11:16:04.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-22 11:16:05.002737 | orchestrator | 11:16:05.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-22 11:16:05.016030 | orchestrator | 11:16:05.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-22 11:16:05.039387 | orchestrator | 11:16:05.039 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-22 11:16:05.318423 | orchestrator | 11:16:05.318 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=f4c4cccf-6106-481f-a690-70f34a54183b]
2025-06-22 11:16:05.325313 | orchestrator | 11:16:05.325 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=78f6eb13-c64f-4a4d-8d42-a1e1157c4033]
2025-06-22 11:16:05.393218 | orchestrator | 11:16:05.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=7ea9cfe9-e584-4538-969c-cb61cccf4b41]
2025-06-22 11:16:05.426621 | orchestrator | 11:16:05.426 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=9da8c542-f304-48e2-b337-ad2903d45683]
2025-06-22 11:16:05.438663 | orchestrator | 11:16:05.438 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=6a2610a1-2e6a-4331-b268-14d7657bafb2]
2025-06-22 11:16:05.444198 | orchestrator | 11:16:05.443 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=939cb3b2-f470-4f15-9cd5-5f32e96d8a48]
2025-06-22 11:16:09.203492 | orchestrator | 11:16:09.203 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=b107c942-2a30-421a-9063-b8d6fed3da23]
2025-06-22 11:16:09.214061 | orchestrator | 11:16:09.213 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-22 11:16:09.217772 | orchestrator | 11:16:09.217 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-22 11:16:09.217823 | orchestrator | 11:16:09.217 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-22 11:16:09.437518 | orchestrator | 11:16:09.437 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=45617d19-759a-44d0-84c9-40f8bc8c29c0]
2025-06-22 11:16:09.448149 | orchestrator | 11:16:09.447 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-22 11:16:09.448906 | orchestrator | 11:16:09.448 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-22 11:16:09.450076 | orchestrator | 11:16:09.449 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-22 11:16:09.450164 | orchestrator | 11:16:09.450 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-22 11:16:09.452606 | orchestrator | 11:16:09.452 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-22 11:16:09.464629 | orchestrator | 11:16:09.464 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-22 11:16:09.464707 | orchestrator | 11:16:09.464 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-22 11:16:09.464948 | orchestrator | 11:16:09.464 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-22 11:16:09.480229 | orchestrator | 11:16:09.480 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a0f60ed7-1afa-4be8-982f-dfda6d2a99dd]
2025-06-22 11:16:09.491064 | orchestrator | 11:16:09.490 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-22 11:16:09.636329 | orchestrator | 11:16:09.635 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6ca2b252-be83-4d95-953c-198f23388552]
2025-06-22 11:16:09.649002 | orchestrator | 11:16:09.648 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-22 11:16:09.822855 | orchestrator | 11:16:09.822 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=0d8e4b3d-b9c3-4da1-b6d7-c9c65ca52147]
2025-06-22 11:16:09.834567 | orchestrator | 11:16:09.834 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-22 11:16:10.023531 | orchestrator | 11:16:10.023 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=99be3bd1-6e88-4bb5-95cc-f88cd5d86da4]
2025-06-22 11:16:10.031591 | orchestrator | 11:16:10.031 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-22 11:16:10.205223 | orchestrator | 11:16:10.204 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=57975bb6-539e-44a2-855c-33331c688055]
2025-06-22 11:16:10.212410 | orchestrator | 11:16:10.212 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-22 11:16:10.535779 | orchestrator | 11:16:10.535 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f0f02fb0-1eb4-4000-9e83-cf5ac80f12ae]
2025-06-22 11:16:10.548651 | orchestrator | 11:16:10.548 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-22 11:16:10.716084 | orchestrator | 11:16:10.715 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a21bc5cb-e85c-4836-bb42-dca3be52165c]
2025-06-22 11:16:10.723915 | orchestrator | 11:16:10.723 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-22 11:16:10.885800 | orchestrator | 11:16:10.885 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=289a73c5-205a-450a-a20e-408c3a501b94]
2025-06-22 11:16:10.894748 | orchestrator | 11:16:10.894 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-22 11:16:11.066813 | orchestrator | 11:16:11.066 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=f8967c01-abe7-4e77-afea-d041224804b2]
2025-06-22 11:16:11.289366 | orchestrator | 11:16:11.288 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=f2bbc09b-d198-4843-9bde-a87cfdf328ad]
2025-06-22 11:16:15.141728 | orchestrator | 11:16:15.141 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=ab4c8ff7-e7bc-4fe5-ba4b-4f4476934dc9]
2025-06-22 11:16:15.162650 | orchestrator | 11:16:15.162 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=fb8cca32-ab91-4a8d-b275-cf144fd2d0c5]
2025-06-22 11:16:15.230107 | orchestrator | 11:16:15.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=ee064403-4b62-4be7-8587-c7ea3e364c27]
2025-06-22 11:16:15.653016 | orchestrator | 11:16:15.652 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=aa36239b-3abb-4b5c-8166-1697c89ff265]
2025-06-22 11:16:15.752908 | orchestrator | 11:16:15.752 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 7s [id=a90c8a92-b4d8-4c5f-8083-358971d7744a]
2025-06-22 11:16:15.876721 | orchestrator | 11:16:15.876 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=c3a643b9-fa08-4473-a770-a4242e50d5b8]
2025-06-22 11:16:16.085163 | orchestrator | 11:16:16.084 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=36c3f7b1-8bf6-4536-af4f-2534bb9b3d4a]
2025-06-22 11:16:16.882284 | orchestrator | 11:16:16.881 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=c8456cf3-57f4-463c-9369-50f264947b8c]
2025-06-22 11:16:16.903138 | orchestrator | 11:16:16.902 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-22 11:16:16.912433 | orchestrator | 11:16:16.912 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-22 11:16:16.925100 | orchestrator | 11:16:16.924 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-22 11:16:16.928537 | orchestrator | 11:16:16.928 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-22 11:16:16.935141 | orchestrator | 11:16:16.935 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-22 11:16:16.937657 | orchestrator | 11:16:16.937 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-22 11:16:16.938148 | orchestrator | 11:16:16.938 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-22 11:16:23.374089 | orchestrator | 11:16:23.373 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=0d104e37-7757-42c5-a797-a6f335fe5dac]
2025-06-22 11:16:23.384674 | orchestrator | 11:16:23.384 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-22 11:16:23.394871 | orchestrator | 11:16:23.394 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-22 11:16:23.398440 | orchestrator | 11:16:23.398 STDOUT terraform: local_file.inventory: Creating...
2025-06-22 11:16:23.401851 | orchestrator | 11:16:23.401 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ad0d59026aca8929d2460c27d53ec28e0f4a1b90]
2025-06-22 11:16:23.405907 | orchestrator | 11:16:23.405 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=6a6c7cfa74b4c47643f42f473767271e8feb7fe6]
2025-06-22 11:16:24.243265 | orchestrator | 11:16:24.242 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0d104e37-7757-42c5-a797-a6f335fe5dac]
2025-06-22 11:16:26.915065 | orchestrator | 11:16:26.914 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-22 11:16:26.926134 | orchestrator | 11:16:26.925 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-22 11:16:26.929328 | orchestrator | 11:16:26.929 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-22 11:16:26.939533 | orchestrator | 11:16:26.939 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-22 11:16:26.939665 | orchestrator | 11:16:26.939 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-22 11:16:26.939800 | orchestrator | 11:16:26.939 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-22 11:16:36.916191 | orchestrator | 11:16:36.915 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-22 11:16:36.927164 | orchestrator | 11:16:36.926 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-22 11:16:36.930286 | orchestrator | 11:16:36.930 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-22 11:16:36.940537 | orchestrator | 11:16:36.940 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-22 11:16:36.940664 | orchestrator | 11:16:36.940 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-22 11:16:36.940821 | orchestrator | 11:16:36.940 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-22 11:16:46.917246 | orchestrator | 11:16:46.916 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-06-22 11:16:46.928258 | orchestrator | 11:16:46.927 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-22 11:16:46.930364 | orchestrator | 11:16:46.930 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-06-22 11:16:46.940738 | orchestrator | 11:16:46.940 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-06-22 11:16:46.942084 | orchestrator | 11:16:46.941 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-06-22 11:16:46.942141 | orchestrator | 11:16:46.941 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-06-22 11:16:47.296141 | orchestrator | 11:16:47.295 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=8341344e-b005-4338-a819-c8a07ab0b2dc]
2025-06-22 11:16:47.359332 | orchestrator | 11:16:47.358 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=5f311a96-f064-4e57-bb0f-a3f1ec382bb0]
2025-06-22 11:16:47.381853 | orchestrator | 11:16:47.381 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=839b31e2-006f-4c35-825c-a5dc9af0e1ff]
2025-06-22 11:16:47.861168 | orchestrator | 11:16:47.860 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=5ed78173-6d5b-4965-81d9-501e0d9c2815]
2025-06-22 11:16:56.941985 | orchestrator | 11:16:56.941 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-06-22 11:16:56.943000 | orchestrator | 11:16:56.942 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2025-06-22 11:16:57.445428 | orchestrator | 11:16:57.444 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 40s [id=a0a52001-7d57-4107-ae82-2bba1c691486]
2025-06-22 11:16:57.860644 | orchestrator | 11:16:57.860 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=87706d00-f38b-48fd-8a55-4e11e164711a]
2025-06-22 11:16:57.887312 | orchestrator | 11:16:57.887 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-22 11:16:57.890246 | orchestrator | 11:16:57.890 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-22 11:16:57.895315 | orchestrator | 11:16:57.895 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-22 11:16:57.895370 | orchestrator | 11:16:57.895 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-22 11:16:57.900748 | orchestrator | 11:16:57.900 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6335819157319477266] 2025-06-22 11:16:57.902196 | orchestrator | 11:16:57.902 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-22 11:16:57.903410 | orchestrator | 11:16:57.903 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-22 11:16:57.905851 | orchestrator | 11:16:57.905 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-22 11:16:57.917073 | orchestrator | 11:16:57.916 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-22 11:16:57.919352 | orchestrator | 11:16:57.919 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-22 11:16:57.935064 | orchestrator | 11:16:57.934 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-22 11:16:57.940674 | orchestrator | 11:16:57.940 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
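The ids that Terraform reports for the `node_volume_attachment` resources are composite, `<server_id>/<volume_id>`, pairing each instance created above with one of its volumes. A small illustrative helper for splitting them (assumption: the OpenStack provider always uses this two-part form):

```python
def split_attach_id(attach_id: str):
    """Split a compute_volume_attach id "<server_id>/<volume_id>" into its parts."""
    server_id, _, volume_id = attach_id.partition("/")
    return server_id, volume_id
```

Applied to the attachment ids in the log, the first half can be matched back to the `node_server` instance ids to see that each node received three volumes.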
2025-06-22 11:17:03.250786 | orchestrator | 11:17:03.250 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=8341344e-b005-4338-a819-c8a07ab0b2dc/060f7999-6812-4095-99a7-aa228581a5cf] 2025-06-22 11:17:03.268496 | orchestrator | 11:17:03.268 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=5ed78173-6d5b-4965-81d9-501e0d9c2815/7610229b-d7bf-450f-9964-1d42e936a357] 2025-06-22 11:17:03.282299 | orchestrator | 11:17:03.281 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=a0a52001-7d57-4107-ae82-2bba1c691486/a129606c-fab1-48ed-9350-9d2eafddbd52] 2025-06-22 11:17:03.300391 | orchestrator | 11:17:03.299 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=5ed78173-6d5b-4965-81d9-501e0d9c2815/c288123e-75d1-4d08-8561-55f7fbbd7c1b] 2025-06-22 11:17:03.309059 | orchestrator | 11:17:03.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=8341344e-b005-4338-a819-c8a07ab0b2dc/899f0377-b87c-421a-9d44-3bd393f5c125] 2025-06-22 11:17:03.326365 | orchestrator | 11:17:03.326 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=a0a52001-7d57-4107-ae82-2bba1c691486/a273c01c-52c4-42f8-a181-d91a87ff3a5e] 2025-06-22 11:17:03.331323 | orchestrator | 11:17:03.331 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=5ed78173-6d5b-4965-81d9-501e0d9c2815/4b47f8cd-db2a-4bea-898d-3d48c49a84c2] 2025-06-22 11:17:03.359181 | orchestrator | 11:17:03.358 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=8341344e-b005-4338-a819-c8a07ab0b2dc/95ca9be4-ae4c-4603-a11a-c98b5f55b273] 2025-06-22 11:17:03.373793 | orchestrator | 
11:17:03.373 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=a0a52001-7d57-4107-ae82-2bba1c691486/0234f42c-6d02-44b8-b796-e801f7c6659f] 2025-06-22 11:17:07.941545 | orchestrator | 11:17:07.941 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-22 11:17:17.942705 | orchestrator | 11:17:17.942 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-22 11:17:18.338660 | orchestrator | 11:17:18.338 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=67aa0fd7-7e08-46e1-a5e5-07087ff479a4] 2025-06-22 11:17:19.965098 | orchestrator | 11:17:19.964 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-06-22 11:17:19.965207 | orchestrator | 11:17:19.965 STDOUT terraform: Outputs: 2025-06-22 11:17:19.965225 | orchestrator | 11:17:19.965 STDOUT terraform: manager_address = 2025-06-22 11:17:19.965237 | orchestrator | 11:17:19.965 STDOUT terraform: private_key = 2025-06-22 11:17:20.351404 | orchestrator | ok: Runtime: 0:01:47.278112 2025-06-22 11:17:20.393922 | 2025-06-22 11:17:20.394079 | TASK [Fetch manager address] 2025-06-22 11:17:20.848542 | orchestrator | ok 2025-06-22 11:17:20.859121 | 2025-06-22 11:17:20.859256 | TASK [Set manager_host address] 2025-06-22 11:17:20.941670 | orchestrator | ok 2025-06-22 11:17:20.951688 | 2025-06-22 11:17:20.951819 | LOOP [Update ansible collections] 2025-06-22 11:17:22.036995 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 11:17:22.037375 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 11:17:22.037450 | orchestrator | Starting galaxy collection install process 2025-06-22 11:17:22.037500 | orchestrator | Process install dependency map 2025-06-22 11:17:22.037562 | orchestrator | Starting collection 
install process 2025-06-22 11:17:22.037614 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-06-22 11:17:22.037666 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-06-22 11:17:22.037774 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-22 11:17:22.037892 | orchestrator | ok: Item: commons Runtime: 0:00:00.760136 2025-06-22 11:17:23.490157 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 11:17:23.490367 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 11:17:23.490438 | orchestrator | Starting galaxy collection install process 2025-06-22 11:17:23.490492 | orchestrator | Process install dependency map 2025-06-22 11:17:23.490540 | orchestrator | Starting collection install process 2025-06-22 11:17:23.490587 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-06-22 11:17:23.490632 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-06-22 11:17:23.490675 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-22 11:17:23.490737 | orchestrator | ok: Item: services Runtime: 0:00:01.190314 2025-06-22 11:17:23.518959 | 2025-06-22 11:17:23.519138 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 11:17:34.091996 | orchestrator | ok 2025-06-22 11:17:34.102951 | 2025-06-22 11:17:34.103064 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 11:18:34.147559 | orchestrator | ok 2025-06-22 11:18:34.164587 | 2025-06-22 11:18:34.164846 | TASK [Fetch manager ssh hostkey] 2025-06-22 11:18:35.746299 | 
orchestrator | Output suppressed because no_log was given 2025-06-22 11:18:35.755817 | 2025-06-22 11:18:35.755954 | TASK [Get ssh keypair from terraform environment] 2025-06-22 11:18:36.289023 | orchestrator | ok: Runtime: 0:00:00.008956 2025-06-22 11:18:36.297174 | 2025-06-22 11:18:36.297293 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-22 11:18:36.341292 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-22 11:18:36.348449 | 2025-06-22 11:18:36.348555 | TASK [Run manager part 0] 2025-06-22 11:18:37.362143 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 11:18:37.405637 | orchestrator | 2025-06-22 11:18:37.405679 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-22 11:18:37.405686 | orchestrator | 2025-06-22 11:18:37.405698 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-22 11:18:39.063129 | orchestrator | ok: [testbed-manager] 2025-06-22 11:18:39.063185 | orchestrator | 2025-06-22 11:18:39.063213 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 11:18:39.063226 | orchestrator | 2025-06-22 11:18:39.063239 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 11:18:41.045074 | orchestrator | ok: [testbed-manager] 2025-06-22 11:18:41.045250 | orchestrator | 2025-06-22 11:18:41.045270 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 11:18:41.709667 | orchestrator | ok: [testbed-manager] 2025-06-22 11:18:41.709754 | orchestrator | 2025-06-22 11:18:41.709772 | orchestrator | TASK [Set repo_path fact] 
****************************************************** 2025-06-22 11:18:41.757469 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.757513 | orchestrator | 2025-06-22 11:18:41.757522 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-22 11:18:41.789080 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.789113 | orchestrator | 2025-06-22 11:18:41.789120 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 11:18:41.831513 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.831554 | orchestrator | 2025-06-22 11:18:41.831560 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 11:18:41.860127 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.860162 | orchestrator | 2025-06-22 11:18:41.860169 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 11:18:41.893674 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.893705 | orchestrator | 2025-06-22 11:18:41.893712 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-22 11:18:41.927654 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.927714 | orchestrator | 2025-06-22 11:18:41.927730 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-22 11:18:41.953867 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:18:41.953897 | orchestrator | 2025-06-22 11:18:41.953903 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-22 11:18:42.755026 | orchestrator | changed: [testbed-manager] 2025-06-22 11:18:42.755074 | orchestrator | 2025-06-22 11:18:42.755081 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-22 
11:21:55.244809 | orchestrator | changed: [testbed-manager] 2025-06-22 11:21:55.244917 | orchestrator | 2025-06-22 11:21:55.244937 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-22 11:23:07.681536 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:07.681621 | orchestrator | 2025-06-22 11:23:07.681631 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 11:23:27.510478 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:27.510547 | orchestrator | 2025-06-22 11:23:27.510565 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 11:23:35.980949 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:35.981041 | orchestrator | 2025-06-22 11:23:35.981058 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 11:23:36.028198 | orchestrator | ok: [testbed-manager] 2025-06-22 11:23:36.028294 | orchestrator | 2025-06-22 11:23:36.028319 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-22 11:23:36.818879 | orchestrator | ok: [testbed-manager] 2025-06-22 11:23:36.818963 | orchestrator | 2025-06-22 11:23:36.818980 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-22 11:23:37.558957 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:37.559705 | orchestrator | 2025-06-22 11:23:37.559734 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-22 11:23:44.077726 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:44.077861 | orchestrator | 2025-06-22 11:23:44.077910 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-22 11:23:49.993656 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:49.993727 | 
orchestrator | 2025-06-22 11:23:49.993744 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-22 11:23:52.571988 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:52.572030 | orchestrator | 2025-06-22 11:23:52.572039 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-22 11:23:54.358299 | orchestrator | changed: [testbed-manager] 2025-06-22 11:23:54.359094 | orchestrator | 2025-06-22 11:23:54.359119 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-22 11:23:55.496516 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 11:23:55.496567 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 11:23:55.496574 | orchestrator | 2025-06-22 11:23:55.496581 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-22 11:23:55.541981 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 11:23:55.542063 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 11:23:55.542070 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 11:23:55.542075 | orchestrator | deprecation_warnings=False in ansible.cfg. 
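Pins such as `requests>=2.32.2` and `docker>=7.1.0` in the tasks above combine a package name, a comparison operator, and a version. A rough sketch of splitting such a specifier (illustrative only; real tooling would use a proper requirement parser):

```python
import re

def split_requirement(req: str):
    """Split 'name>=X.Y.Z' into (name, operator, version).

    A bare name (no operator) returns (name, None, None).
    """
    m = re.match(r"^([\w.]+?)(==|>=|<=|>|<)(.+)$", req)
    if not m:
        return req, None, None
    return m.group(1), m.group(2), m.group(3)
```

The same shape covers the galaxy collection pin `community.docker>=3.10.2` installed later, since dots are allowed in the name part.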
2025-06-22 11:24:00.265277 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 11:24:00.265325 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 11:24:00.265333 | orchestrator | 2025-06-22 11:24:00.265340 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-22 11:24:00.798182 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:00.798221 | orchestrator | 2025-06-22 11:24:00.798228 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-22 11:24:21.757390 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-22 11:24:21.757457 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-22 11:24:21.757464 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-22 11:24:21.757469 | orchestrator | 2025-06-22 11:24:21.757474 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-22 11:24:24.000399 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-22 11:24:24.000486 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-22 11:24:24.000501 | orchestrator | 2025-06-22 11:24:24.000514 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-22 11:24:24.000526 | orchestrator | 2025-06-22 11:24:24.000537 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 11:24:25.446339 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:25.446433 | orchestrator | 2025-06-22 11:24:25.446453 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 11:24:25.490863 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:25.490915 | 
orchestrator | 2025-06-22 11:24:25.490925 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 11:24:25.555091 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:25.555145 | orchestrator | 2025-06-22 11:24:25.555152 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 11:24:26.312069 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:26.312257 | orchestrator | 2025-06-22 11:24:26.312280 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 11:24:27.016281 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:27.016383 | orchestrator | 2025-06-22 11:24:27.016412 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 11:24:28.442650 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-22 11:24:28.442689 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-22 11:24:28.442720 | orchestrator | 2025-06-22 11:24:28.442734 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 11:24:29.777532 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:29.777640 | orchestrator | 2025-06-22 11:24:29.777657 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-22 11:24:31.528741 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 11:24:31.528954 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-22 11:24:31.528975 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-22 11:24:31.528987 | orchestrator | 2025-06-22 11:24:31.529000 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-06-22 11:24:31.584279 | orchestrator | skipping: 
[testbed-manager] 2025-06-22 11:24:31.584323 | orchestrator | 2025-06-22 11:24:31.584333 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 11:24:32.152938 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:32.152977 | orchestrator | 2025-06-22 11:24:32.152985 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 11:24:32.226847 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:32.226886 | orchestrator | 2025-06-22 11:24:32.226895 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-22 11:24:33.406185 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 11:24:33.406238 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:33.406249 | orchestrator | 2025-06-22 11:24:33.406258 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 11:24:33.439466 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:33.439512 | orchestrator | 2025-06-22 11:24:33.439520 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 11:24:33.470716 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:33.470771 | orchestrator | 2025-06-22 11:24:33.470779 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 11:24:33.499890 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:33.499937 | orchestrator | 2025-06-22 11:24:33.499944 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 11:24:33.556593 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:33.556672 | orchestrator | 2025-06-22 11:24:33.556717 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 11:24:34.277873 | orchestrator 
| ok: [testbed-manager] 2025-06-22 11:24:34.277906 | orchestrator | 2025-06-22 11:24:34.277912 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 11:24:34.277917 | orchestrator | 2025-06-22 11:24:34.277921 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 11:24:35.623048 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:35.623101 | orchestrator | 2025-06-22 11:24:35.623111 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-22 11:24:36.585106 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:36.585140 | orchestrator | 2025-06-22 11:24:36.585146 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:24:36.585152 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-06-22 11:24:36.585156 | orchestrator | 2025-06-22 11:24:37.112419 | orchestrator | ok: Runtime: 0:06:00.051499 2025-06-22 11:24:37.123837 | 2025-06-22 11:24:37.123984 | TASK [Point out that logging in to the manager is now possible] 2025-06-22 11:24:37.157671 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-06-22 11:24:37.166046 | 2025-06-22 11:24:37.166180 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-22 11:24:37.203387 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
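The PLAY RECAP line above is the usual place a CI wrapper checks a nested Ansible run for failures. A sketch of parsing one recap host line (hypothetical helper, not how Zuul itself does it):

```python
import re

def parse_recap(line: str) -> dict:
    """Turn one PLAY RECAP host line into {'host': str, 'ok': int, 'failed': int, ...}."""
    host, _, counters = line.partition(":")
    stats = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", counters)}
    stats["host"] = host.strip()
    return stats
```

A wrapper could then gate on `stats["failed"] == 0` and `stats["unreachable"] == 0`, which is effectively what the job's ok status above reflects.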
2025-06-22 11:24:37.212484 | 2025-06-22 11:24:37.212615 | TASK [Run manager part 1 + 2] 2025-06-22 11:24:38.057158 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 11:24:38.114147 | orchestrator | 2025-06-22 11:24:38.114200 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-22 11:24:38.114208 | orchestrator | 2025-06-22 11:24:38.114219 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 11:24:40.577295 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:40.577379 | orchestrator | 2025-06-22 11:24:40.577431 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 11:24:40.612457 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:40.612512 | orchestrator | 2025-06-22 11:24:40.612521 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 11:24:40.653963 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:40.654030 | orchestrator | 2025-06-22 11:24:40.654044 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 11:24:40.694259 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:40.694419 | orchestrator | 2025-06-22 11:24:40.694441 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 11:24:40.770152 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:40.770230 | orchestrator | 2025-06-22 11:24:40.770248 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 11:24:40.841060 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:40.841199 | orchestrator | 2025-06-22 11:24:40.841220 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 11:24:40.890192 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-22 11:24:40.890289 | orchestrator | 2025-06-22 11:24:40.890317 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 11:24:41.585111 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:41.585166 | orchestrator | 2025-06-22 11:24:41.585177 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 11:24:41.631604 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:41.631652 | orchestrator | 2025-06-22 11:24:41.631662 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 11:24:42.973314 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:42.973371 | orchestrator | 2025-06-22 11:24:42.973382 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 11:24:43.549010 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:43.549065 | orchestrator | 2025-06-22 11:24:43.549071 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 11:24:44.637812 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:44.637960 | orchestrator | 2025-06-22 11:24:44.637969 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 11:24:57.917930 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:57.918054 | orchestrator | 2025-06-22 11:24:57.918074 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 11:24:58.588710 | orchestrator | ok: [testbed-manager] 2025-06-22 11:24:58.588796 | orchestrator | 2025-06-22 11:24:58.588815 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-06-22 11:24:58.644794 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:24:58.644832 | orchestrator | 2025-06-22 11:24:58.644840 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-22 11:24:59.618729 | orchestrator | changed: [testbed-manager] 2025-06-22 11:24:59.618818 | orchestrator | 2025-06-22 11:24:59.618834 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-22 11:25:00.586070 | orchestrator | changed: [testbed-manager] 2025-06-22 11:25:00.586158 | orchestrator | 2025-06-22 11:25:00.586174 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-22 11:25:01.152368 | orchestrator | changed: [testbed-manager] 2025-06-22 11:25:01.152458 | orchestrator | 2025-06-22 11:25:01.152474 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-22 11:25:01.194911 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 11:25:01.194974 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 11:25:01.194981 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 11:25:01.194986 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-22 11:25:04.333058 | orchestrator | changed: [testbed-manager] 2025-06-22 11:25:04.333151 | orchestrator | 2025-06-22 11:25:04.333168 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-22 11:25:13.327065 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-22 11:25:13.327171 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-22 11:25:13.327192 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-22 11:25:13.327206 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-22 11:25:13.327227 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-22 11:25:13.327241 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-22 11:25:13.327254 | orchestrator | 2025-06-22 11:25:13.327268 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-22 11:25:14.360279 | orchestrator | changed: [testbed-manager] 2025-06-22 11:25:14.360374 | orchestrator | 2025-06-22 11:25:14.360391 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-22 11:25:14.404898 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:25:14.404963 | orchestrator | 2025-06-22 11:25:14.404974 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-22 11:25:17.483777 | orchestrator | changed: [testbed-manager] 2025-06-22 11:25:17.483844 | orchestrator | 2025-06-22 11:25:17.483859 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-22 11:25:17.525221 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:25:17.525257 | orchestrator | 2025-06-22 11:25:17.525265 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-22 11:26:54.307498 | orchestrator | changed: [testbed-manager] 2025-06-22 
11:26:54.307602 | orchestrator |
2025-06-22 11:26:54.307623 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-22 11:26:55.467138 | orchestrator | ok: [testbed-manager]
2025-06-22 11:26:55.467216 | orchestrator |
2025-06-22 11:26:55.467240 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:26:55.467262 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-22 11:26:55.467282 | orchestrator |
2025-06-22 11:26:55.842895 | orchestrator | ok: Runtime: 0:02:18.068388
2025-06-22 11:26:55.860024 |
2025-06-22 11:26:55.860198 | TASK [Reboot manager]
2025-06-22 11:26:57.399287 | orchestrator | ok: Runtime: 0:00:00.956783
2025-06-22 11:26:57.413461 |
2025-06-22 11:26:57.413606 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-22 11:27:11.434600 | orchestrator | ok
2025-06-22 11:27:11.445396 |
2025-06-22 11:27:11.445568 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-22 11:28:11.489668 | orchestrator | ok
2025-06-22 11:28:11.498949 |
2025-06-22 11:28:11.499096 | TASK [Deploy manager + bootstrap nodes]
2025-06-22 11:28:13.852508 | orchestrator |
2025-06-22 11:28:13.852740 | orchestrator | # DEPLOY MANAGER
2025-06-22 11:28:13.852766 | orchestrator |
2025-06-22 11:28:13.852782 | orchestrator | + set -e
2025-06-22 11:28:13.852796 | orchestrator | + echo
2025-06-22 11:28:13.852811 | orchestrator | + echo '# DEPLOY MANAGER'
2025-06-22 11:28:13.852829 | orchestrator | + echo
2025-06-22 11:28:13.852879 | orchestrator | + cat /opt/manager-vars.sh
2025-06-22 11:28:13.856251 | orchestrator | export NUMBER_OF_NODES=6
2025-06-22 11:28:13.856370 | orchestrator |
2025-06-22 11:28:13.856389 | orchestrator | export CEPH_VERSION=reef
2025-06-22 11:28:13.856404 | orchestrator | export CONFIGURATION_VERSION=main
2025-06-22 11:28:13.856416 | orchestrator | export MANAGER_VERSION=9.1.0
2025-06-22 11:28:13.856464 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-06-22 11:28:13.856475 | orchestrator |
2025-06-22 11:28:13.856494 | orchestrator | export ARA=false
2025-06-22 11:28:13.856505 | orchestrator | export DEPLOY_MODE=manager
2025-06-22 11:28:13.856522 | orchestrator | export TEMPEST=false
2025-06-22 11:28:13.856534 | orchestrator | export IS_ZUUL=true
2025-06-22 11:28:13.856545 | orchestrator |
2025-06-22 11:28:13.856563 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:28:13.856574 | orchestrator | export EXTERNAL_API=false
2025-06-22 11:28:13.856585 | orchestrator |
2025-06-22 11:28:13.856595 | orchestrator | export IMAGE_USER=ubuntu
2025-06-22 11:28:13.856609 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-06-22 11:28:13.856619 | orchestrator |
2025-06-22 11:28:13.856630 | orchestrator | export CEPH_STACK=ceph-ansible
2025-06-22 11:28:13.856650 | orchestrator |
2025-06-22 11:28:13.856661 | orchestrator | + echo
2025-06-22 11:28:13.856678 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-22 11:28:13.857101 | orchestrator | ++ export INTERACTIVE=false
2025-06-22 11:28:13.857134 | orchestrator | ++ INTERACTIVE=false
2025-06-22 11:28:13.857149 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-22 11:28:13.857163 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-22 11:28:13.857175 | orchestrator | + source /opt/manager-vars.sh
2025-06-22 11:28:13.857185 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-22 11:28:13.857197 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-22 11:28:13.857207 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-22 11:28:13.857218 | orchestrator | ++ CEPH_VERSION=reef
2025-06-22 11:28:13.857229 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-22 11:28:13.857240 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-22 11:28:13.857257 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-22 11:28:13.857268 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-22 11:28:13.857279 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-22 11:28:13.857301 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-22 11:28:13.857313 | orchestrator | ++ export ARA=false
2025-06-22 11:28:13.857356 | orchestrator | ++ ARA=false
2025-06-22 11:28:13.857367 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-22 11:28:13.857378 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-22 11:28:13.857388 | orchestrator | ++ export TEMPEST=false
2025-06-22 11:28:13.857399 | orchestrator | ++ TEMPEST=false
2025-06-22 11:28:13.857409 | orchestrator | ++ export IS_ZUUL=true
2025-06-22 11:28:13.857420 | orchestrator | ++ IS_ZUUL=true
2025-06-22 11:28:13.857430 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:28:13.857441 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:28:13.857451 | orchestrator | ++ export EXTERNAL_API=false
2025-06-22 11:28:13.857462 | orchestrator | ++ EXTERNAL_API=false
2025-06-22 11:28:13.857472 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-22 11:28:13.857483 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-22 11:28:13.857493 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-22 11:28:13.857504 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-22 11:28:13.857519 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-22 11:28:13.857530 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-22 11:28:13.857541 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-22 11:28:13.915982 | orchestrator | + docker version
2025-06-22 11:28:14.180574 | orchestrator | Client: Docker Engine - Community
2025-06-22 11:28:14.180682 | orchestrator | Version: 27.5.1
2025-06-22 11:28:14.180700 | orchestrator | API version: 1.47
2025-06-22 11:28:14.180713 | orchestrator | Go version: go1.22.11
2025-06-22 11:28:14.180724 | orchestrator | Git commit: 9f9e405
2025-06-22 11:28:14.180735 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-22 11:28:14.180747 | orchestrator | OS/Arch: linux/amd64
2025-06-22 11:28:14.180757 | orchestrator | Context: default
2025-06-22 11:28:14.180769 | orchestrator |
2025-06-22 11:28:14.180780 | orchestrator | Server: Docker Engine - Community
2025-06-22 11:28:14.180791 | orchestrator | Engine:
2025-06-22 11:28:14.180803 | orchestrator | Version: 27.5.1
2025-06-22 11:28:14.180814 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-22 11:28:14.180855 | orchestrator | Go version: go1.22.11
2025-06-22 11:28:14.180866 | orchestrator | Git commit: 4c9b3b0
2025-06-22 11:28:14.180877 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-22 11:28:14.180888 | orchestrator | OS/Arch: linux/amd64
2025-06-22 11:28:14.180899 | orchestrator | Experimental: false
2025-06-22 11:28:14.180910 | orchestrator | containerd:
2025-06-22 11:28:14.180922 | orchestrator | Version: 1.7.27
2025-06-22 11:28:14.180933 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-22 11:28:14.180957 | orchestrator | runc:
2025-06-22 11:28:14.180969 | orchestrator | Version: 1.2.5
2025-06-22 11:28:14.180980 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-22 11:28:14.180991 | orchestrator | docker-init:
2025-06-22 11:28:14.181002 | orchestrator | Version: 0.19.0
2025-06-22 11:28:14.181014 | orchestrator | GitCommit: de40ad0
2025-06-22 11:28:14.184542 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-22 11:28:14.195771 | orchestrator | + set -e
2025-06-22 11:28:14.195799 | orchestrator | + source /opt/manager-vars.sh
2025-06-22 11:28:14.195811 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-22 11:28:14.195822 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-22 11:28:14.195833 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-22 11:28:14.195844 | orchestrator | ++ CEPH_VERSION=reef
2025-06-22 11:28:14.195854 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-22 11:28:14.195865 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-22 11:28:14.195876 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-22 11:28:14.195887 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-22 11:28:14.195897 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-22 11:28:14.195908 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-22 11:28:14.195918 | orchestrator | ++ export ARA=false
2025-06-22 11:28:14.195930 | orchestrator | ++ ARA=false
2025-06-22 11:28:14.195940 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-22 11:28:14.195951 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-22 11:28:14.195962 | orchestrator | ++ export TEMPEST=false
2025-06-22 11:28:14.195972 | orchestrator | ++ TEMPEST=false
2025-06-22 11:28:14.195982 | orchestrator | ++ export IS_ZUUL=true
2025-06-22 11:28:14.195993 | orchestrator | ++ IS_ZUUL=true
2025-06-22 11:28:14.196004 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:28:14.196015 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:28:14.196025 | orchestrator | ++ export EXTERNAL_API=false
2025-06-22 11:28:14.196036 | orchestrator | ++ EXTERNAL_API=false
2025-06-22 11:28:14.196052 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-22 11:28:14.196063 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-22 11:28:14.196074 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-22 11:28:14.196084 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-22 11:28:14.196095 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-22 11:28:14.196106 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-22 11:28:14.196117 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-22 11:28:14.196127 | orchestrator | ++ export INTERACTIVE=false
2025-06-22 11:28:14.196138 | orchestrator | ++ INTERACTIVE=false
2025-06-22 11:28:14.196148 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-22 11:28:14.196164 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-22 11:28:14.196290 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-22 11:28:14.196306 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-06-22 11:28:14.203702 | orchestrator | + set -e
2025-06-22 11:28:14.203781 | orchestrator | + VERSION=9.1.0
2025-06-22 11:28:14.203799 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-06-22 11:28:14.212223 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-22 11:28:14.212300 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-22 11:28:14.218077 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-22 11:28:14.223484 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-06-22 11:28:14.232260 | orchestrator | /opt/configuration ~
2025-06-22 11:28:14.232300 | orchestrator | + set -e
2025-06-22 11:28:14.232313 | orchestrator | + pushd /opt/configuration
2025-06-22 11:28:14.232347 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-22 11:28:14.235791 | orchestrator | + source /opt/venv/bin/activate
2025-06-22 11:28:14.236756 | orchestrator | ++ deactivate nondestructive
2025-06-22 11:28:14.236780 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:14.236796 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:14.236834 | orchestrator | ++ hash -r
2025-06-22 11:28:14.236887 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:14.236900 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-22 11:28:14.236911 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-22 11:28:14.236926 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-22 11:28:14.236941 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-22 11:28:14.236952 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-22 11:28:14.237040 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-22 11:28:14.237055 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-22 11:28:14.237493 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-22 11:28:14.237555 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-22 11:28:14.237568 | orchestrator | ++ export PATH
2025-06-22 11:28:14.237580 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:14.237688 | orchestrator | ++ '[' -z '' ']'
2025-06-22 11:28:14.237707 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-22 11:28:14.237718 | orchestrator | ++ PS1='(venv) '
2025-06-22 11:28:14.237776 | orchestrator | ++ export PS1
2025-06-22 11:28:14.237790 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-22 11:28:14.237800 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-22 11:28:14.238137 | orchestrator | ++ hash -r
2025-06-22 11:28:14.238229 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-22 11:28:15.254996 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-22 11:28:15.256024 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4)
2025-06-22 11:28:15.257398 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-22 11:28:15.258670 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-22 11:28:15.259650 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-06-22 11:28:15.269532 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-22 11:28:15.270957 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-22 11:28:15.272101 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-22 11:28:15.273581 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-22 11:28:15.307749 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-22 11:28:15.309107 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-22 11:28:15.310806 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-06-22 11:28:15.312142 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.6.15)
2025-06-22 11:28:15.316411 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-22 11:28:15.526969 | orchestrator | ++ which gilt
2025-06-22 11:28:15.530735 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-22 11:28:15.530779 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-22 11:28:15.747595 | orchestrator | osism.cfg-generics:
2025-06-22 11:28:15.900867 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-22 11:28:15.901020 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-22 11:28:15.901063 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-22 11:28:15.901310 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-22 11:28:16.716789 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-22 11:28:16.726813 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-22 11:28:17.042106 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-22 11:28:17.090069 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-22 11:28:17.090148 | orchestrator | + deactivate
2025-06-22 11:28:17.090162 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-22 11:28:17.090176 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-22 11:28:17.090187 | orchestrator | + export PATH
2025-06-22 11:28:17.090198 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-22 11:28:17.090220 | orchestrator | ~
2025-06-22 11:28:17.090232 | orchestrator | + '[' -n '' ']'
2025-06-22 11:28:17.090246 | orchestrator | + hash -r
2025-06-22 11:28:17.090257 | orchestrator | + '[' -n '' ']'
2025-06-22 11:28:17.090268 | orchestrator | + unset VIRTUAL_ENV
2025-06-22 11:28:17.090279 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-22 11:28:17.090290 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-22 11:28:17.090301 | orchestrator | + unset -f deactivate
2025-06-22 11:28:17.090312 | orchestrator | + popd
2025-06-22 11:28:17.091818 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-22 11:28:17.091837 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-22 11:28:17.092442 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-22 11:28:17.150111 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-22 11:28:17.150168 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-22 11:28:17.150176 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-22 11:28:17.242477 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-22 11:28:17.242795 | orchestrator | + source /opt/venv/bin/activate
2025-06-22 11:28:17.242814 | orchestrator | ++ deactivate nondestructive
2025-06-22 11:28:17.242857 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:17.242894 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:17.242906 | orchestrator | ++ hash -r
2025-06-22 11:28:17.242917 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:17.242929 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-22 11:28:17.242939 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-22 11:28:17.242979 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-22 11:28:17.243005 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-22 11:28:17.243017 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-22 11:28:17.243028 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-22 11:28:17.243039 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-22 11:28:17.243051 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-22 11:28:17.243063 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-22 11:28:17.243099 | orchestrator | ++ export PATH
2025-06-22 11:28:17.243124 | orchestrator | ++ '[' -n '' ']'
2025-06-22 11:28:17.243136 | orchestrator | ++ '[' -z '' ']'
2025-06-22 11:28:17.243147 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-22 11:28:17.243158 | orchestrator | ++ PS1='(venv) '
2025-06-22 11:28:17.243168 | orchestrator | ++ export PS1
2025-06-22 11:28:17.243210 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-22 11:28:17.243223 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-22 11:28:17.243234 | orchestrator | ++ hash -r
2025-06-22 11:28:17.243457 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-22 11:28:18.334622 | orchestrator |
2025-06-22 11:28:18.334726 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-22 11:28:18.334741 | orchestrator |
2025-06-22 11:28:18.334753 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-22 11:28:18.886592 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:18.886694 | orchestrator |
2025-06-22 11:28:18.886711 | orchestrator | TASK [Copy fact files] *********************************************************
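The deploy trace above pins the manager version with sed and then gates `enable_osism_kubernetes: true` on a semver comparison (`semver 9.1.0 7.0.0` printing `1`, checked with `[[ 1 -ge 0 ]]`). A minimal sketch of that gating pattern, assuming the `1`/`0`/`-1` print convention of contrib/semver2.sh; the `compare_semver` helper here is a stand-in built on `sort -V`, which does not handle pre-release tags the way a full semver comparator would:

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace. compare_semver is a
# hypothetical stand-in for contrib/semver2.sh: it prints 1, 0 or -1
# depending on whether the first version is greater, equal or smaller.
compare_semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1    # $1 sorts after $2, so it is the greater version
    else
        echo -1
    fi
}

MANAGER_VERSION=9.1.0
# Enable the feature only for manager versions >= 7.0.0, as in the trace.
if [ "$(compare_semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]; then
    echo 'enable_osism_kubernetes: true'   # -> prints "enable_osism_kubernetes: true"
fi
```

In the real script the echoed line is appended to the configuration repository rather than printed, and the comparator is the symlinked `semver` wrapper installed earlier in the trace.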
2025-06-22 11:28:19.882143 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:19.882248 | orchestrator |
2025-06-22 11:28:19.882265 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-22 11:28:19.882277 | orchestrator |
2025-06-22 11:28:19.882288 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-22 11:28:22.122180 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:22.122261 | orchestrator |
2025-06-22 11:28:22.122271 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-22 11:28:22.166748 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:22.166831 | orchestrator |
2025-06-22 11:28:22.166847 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-22 11:28:22.637367 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:22.637471 | orchestrator |
2025-06-22 11:28:22.637489 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-22 11:28:22.676718 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:22.676794 | orchestrator |
2025-06-22 11:28:22.676807 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-22 11:28:23.007579 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:23.007677 | orchestrator |
2025-06-22 11:28:23.007694 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-22 11:28:23.062823 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:23.062910 | orchestrator |
2025-06-22 11:28:23.062926 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-22 11:28:23.378702 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:23.378800 | orchestrator |
2025-06-22 11:28:23.378816 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-22 11:28:23.471552 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:23.471646 | orchestrator |
2025-06-22 11:28:23.471660 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-22 11:28:23.471672 | orchestrator |
2025-06-22 11:28:23.471683 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-22 11:28:25.274701 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:25.274835 | orchestrator |
2025-06-22 11:28:25.274862 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-22 11:28:25.364852 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-22 11:28:25.364947 | orchestrator |
2025-06-22 11:28:25.364961 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-22 11:28:25.419530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-22 11:28:25.419585 | orchestrator |
2025-06-22 11:28:25.419642 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-22 11:28:26.494839 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-22 11:28:26.494938 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-22 11:28:26.494954 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-22 11:28:26.494965 | orchestrator |
2025-06-22 11:28:26.494977 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-22 11:28:28.291386 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-22 11:28:28.291493 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-22 11:28:28.291508 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-22 11:28:28.291520 | orchestrator |
2025-06-22 11:28:28.291532 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-22 11:28:28.919922 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-22 11:28:28.919998 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:28.920012 | orchestrator |
2025-06-22 11:28:28.920024 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-22 11:28:29.565375 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-22 11:28:29.565474 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:29.565489 | orchestrator |
2025-06-22 11:28:29.565501 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-22 11:28:29.621290 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:29.621379 | orchestrator |
2025-06-22 11:28:29.621394 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-22 11:28:29.989831 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:29.989928 | orchestrator |
2025-06-22 11:28:29.989944 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-22 11:28:30.064498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-22 11:28:30.064594 | orchestrator |
2025-06-22 11:28:30.064612 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-22 11:28:31.056797 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:31.056896 | orchestrator |
2025-06-22 11:28:31.056912 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-22 11:28:31.824891 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:31.824988 | orchestrator |
2025-06-22 11:28:31.825003 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-22 11:28:44.607212 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:44.607287 | orchestrator |
2025-06-22 11:28:44.607320 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-22 11:28:44.648219 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:44.648276 | orchestrator |
2025-06-22 11:28:44.648282 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-22 11:28:44.648287 | orchestrator |
2025-06-22 11:28:44.648291 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-22 11:28:46.419855 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:46.419951 | orchestrator |
2025-06-22 11:28:46.419966 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-22 11:28:46.528808 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-22 11:28:46.528851 | orchestrator |
2025-06-22 11:28:46.528863 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-22 11:28:46.585005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-22 11:28:46.585071 | orchestrator |
2025-06-22 11:28:46.585084 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-22 11:28:48.997483 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:48.997589 | orchestrator |
2025-06-22 11:28:48.997606 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-22 11:28:49.050134 | orchestrator | ok: [testbed-manager]
2025-06-22 11:28:49.050176 | orchestrator |
2025-06-22 11:28:49.050188 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-22 11:28:49.172883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-22 11:28:49.172948 | orchestrator |
2025-06-22 11:28:49.172961 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-22 11:28:51.971926 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-22 11:28:51.972036 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-22 11:28:51.972050 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-22 11:28:51.972063 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-22 11:28:51.972074 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-22 11:28:51.972085 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-22 11:28:51.972096 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-22 11:28:51.972107 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-22 11:28:51.972119 | orchestrator |
2025-06-22 11:28:51.972133 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-22 11:28:52.569373 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:52.569472 | orchestrator |
2025-06-22 11:28:52.569487 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-22 11:28:53.179978 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:53.180079 | orchestrator |
2025-06-22 11:28:53.180095 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-22 11:28:53.256511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-22 11:28:53.256639 | orchestrator |
2025-06-22 11:28:53.256665 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-22 11:28:54.458626 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-22 11:28:54.458736 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-22 11:28:54.458751 | orchestrator |
2025-06-22 11:28:54.458764 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-22 11:28:55.073389 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:55.073490 | orchestrator |
2025-06-22 11:28:55.073505 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-22 11:28:55.123564 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:55.123608 | orchestrator |
2025-06-22 11:28:55.123622 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-22 11:28:55.197219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-22 11:28:55.197274 | orchestrator |
2025-06-22 11:28:55.197287 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-22 11:28:56.520732 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-22 11:28:56.520834 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-22 11:28:56.520849 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:56.520863 | orchestrator |
2025-06-22 11:28:56.520875 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-22 11:28:57.147899 | orchestrator | changed: [testbed-manager]
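The "Copy ... environment file" tasks above each render a small KEY=VALUE file that a manager container later consumes. A plain-shell sketch of that pattern; the destination filename, keys, values, and file mode below are illustrative only, not the osism.services.manager role's actual templates:

```shell
#!/usr/bin/env bash
# Sketch: write a KEY=VALUE environment file the way the role's
# "Copy ... environment file" tasks do. All names here are hypothetical.
set -e

render_env_file() {
    local dest=$1; shift
    : > "$dest"                      # truncate/create the file
    local kv
    for kv in "$@"; do
        printf '%s\n' "$kv" >> "$dest"
    done
    chmod 0640 "$dest"               # env files usually carry secrets
}

dest=$(mktemp -d)/ara.env
render_env_file "$dest" \
    "ARA_API_SERVER=https://testbed-manager:8120" \
    "ARA_API_INSECURE=True"
cat "$dest"
```

The real role renders these files from Jinja2 templates with the play's variables; the shell loop above only mirrors the resulting on-disk format.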
2025-06-22 11:28:57.147998 | orchestrator |
2025-06-22 11:28:57.148015 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-22 11:28:57.200890 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:28:57.200973 | orchestrator |
2025-06-22 11:28:57.200987 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-22 11:28:57.298502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-22 11:28:57.298562 | orchestrator |
2025-06-22 11:28:57.298576 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-22 11:28:57.811070 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:57.811175 | orchestrator |
2025-06-22 11:28:57.811193 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-22 11:28:58.206963 | orchestrator | changed: [testbed-manager]
2025-06-22 11:28:58.207065 | orchestrator |
2025-06-22 11:28:58.207082 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-22 11:28:59.412170 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-22 11:28:59.412282 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-22 11:28:59.412349 | orchestrator |
2025-06-22 11:28:59.412367 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-22 11:29:00.035285 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:00.035443 | orchestrator |
2025-06-22 11:29:00.035461 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-22 11:29:00.445212 | orchestrator | ok: [testbed-manager]
2025-06-22 11:29:00.445288 | orchestrator |
2025-06-22 11:29:00.445296 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-22 11:29:00.813498 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:00.813613 | orchestrator |
2025-06-22 11:29:00.813637 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-22 11:29:00.863735 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:29:00.863832 | orchestrator |
2025-06-22 11:29:00.863852 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-22 11:29:00.949546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-22 11:29:00.949650 | orchestrator |
2025-06-22 11:29:00.949672 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-22 11:29:00.988517 | orchestrator | ok: [testbed-manager]
2025-06-22 11:29:00.988576 | orchestrator |
2025-06-22 11:29:00.988596 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-22 11:29:03.025937 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-22 11:29:03.026118 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-22 11:29:03.026135 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-22 11:29:03.026177 | orchestrator |
2025-06-22 11:29:03.026189 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-22 11:29:03.717841 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:03.717953 | orchestrator |
2025-06-22 11:29:03.717976 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-22 11:29:04.398849 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:04.398949 | orchestrator |
2025-06-22 11:29:04.398965 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-22 11:29:05.147622 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:05.147723 | orchestrator |
2025-06-22 11:29:05.147738 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-22 11:29:05.222132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-22 11:29:05.222231 | orchestrator |
2025-06-22 11:29:05.222245 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-22 11:29:05.267863 | orchestrator | ok: [testbed-manager]
2025-06-22 11:29:05.267947 | orchestrator |
2025-06-22 11:29:05.267962 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-22 11:29:06.005546 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-22 11:29:06.005667 | orchestrator |
2025-06-22 11:29:06.005676 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-22 11:29:06.090572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-22 11:29:06.090641 | orchestrator |
2025-06-22 11:29:06.090655 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-22 11:29:06.853556 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:06.853663 | orchestrator |
2025-06-22 11:29:06.853680 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-22 11:29:07.450123 | orchestrator | ok: [testbed-manager]
2025-06-22 11:29:07.450225 | orchestrator |
2025-06-22 11:29:07.450240 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-22 11:29:07.504223 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:29:07.504292 | orchestrator |
2025-06-22 11:29:07.504341 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-22 11:29:07.564536 | orchestrator | ok: [testbed-manager]
2025-06-22 11:29:07.564596 | orchestrator |
2025-06-22 11:29:07.564609 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-22 11:29:08.367696 | orchestrator | changed: [testbed-manager]
2025-06-22 11:29:08.367799 | orchestrator |
2025-06-22 11:29:08.367816 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-22 11:30:23.640757 | orchestrator | changed: [testbed-manager]
2025-06-22 11:30:23.640875 | orchestrator |
2025-06-22 11:30:23.640892 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-22 11:30:24.765461 | orchestrator | ok: [testbed-manager]
2025-06-22 11:30:24.765564 | orchestrator |
2025-06-22 11:30:24.765580 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-22 11:30:24.829187 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:30:24.829268 | orchestrator |
2025-06-22 11:30:24.829282 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-22 11:30:27.213514 | orchestrator | changed: [testbed-manager]
2025-06-22 11:30:27.213625 | orchestrator |
2025-06-22 11:30:27.213643 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-22 11:30:27.283521 | orchestrator | ok: [testbed-manager]
2025-06-22 11:30:27.283629 | orchestrator |
2025-06-22 11:30:27.283646 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-22 11:30:27.283659 | orchestrator |
2025-06-22 11:30:27.283670 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-22 11:30:27.345873 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:30:27.345954 | orchestrator | 2025-06-22 11:30:27.345993 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-22 11:31:27.398752 | orchestrator | Pausing for 60 seconds 2025-06-22 11:31:27.398849 | orchestrator | changed: [testbed-manager] 2025-06-22 11:31:27.398860 | orchestrator | 2025-06-22 11:31:27.398869 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-22 11:31:33.040236 | orchestrator | changed: [testbed-manager] 2025-06-22 11:31:33.040353 | orchestrator | 2025-06-22 11:31:33.040439 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-22 11:32:14.638948 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-22 11:32:14.639070 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-22 11:32:14.639085 | orchestrator | changed: [testbed-manager] 2025-06-22 11:32:14.639099 | orchestrator | 2025-06-22 11:32:14.639111 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-22 11:32:22.893581 | orchestrator | changed: [testbed-manager] 2025-06-22 11:32:22.893694 | orchestrator | 2025-06-22 11:32:22.893731 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-22 11:32:22.974503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-22 11:32:22.974586 | orchestrator | 2025-06-22 11:32:22.974596 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 11:32:22.974604 | orchestrator | 2025-06-22 11:32:22.974611 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-22 11:32:23.020977 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:32:23.021036 | orchestrator | 2025-06-22 11:32:23.021043 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:32:23.021051 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-22 11:32:23.021056 | orchestrator | 2025-06-22 11:32:23.124762 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 11:32:23.124848 | orchestrator | + deactivate 2025-06-22 11:32:23.124862 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-22 11:32:23.124875 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 11:32:23.124886 | orchestrator | + export PATH 2025-06-22 11:32:23.124902 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-22 
11:32:23.124915 | orchestrator | + '[' -n '' ']' 2025-06-22 11:32:23.125164 | orchestrator | + hash -r 2025-06-22 11:32:23.125186 | orchestrator | + '[' -n '' ']' 2025-06-22 11:32:23.125197 | orchestrator | + unset VIRTUAL_ENV 2025-06-22 11:32:23.125209 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-22 11:32:23.125220 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-22 11:32:23.125231 | orchestrator | + unset -f deactivate 2025-06-22 11:32:23.125243 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-22 11:32:23.133356 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 11:32:23.133399 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 11:32:23.133411 | orchestrator | + local max_attempts=60 2025-06-22 11:32:23.133423 | orchestrator | + local name=ceph-ansible 2025-06-22 11:32:23.133434 | orchestrator | + local attempt_num=1 2025-06-22 11:32:23.133846 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 11:32:23.162891 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 11:32:23.162931 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 11:32:23.162942 | orchestrator | + local max_attempts=60 2025-06-22 11:32:23.162954 | orchestrator | + local name=kolla-ansible 2025-06-22 11:32:23.162965 | orchestrator | + local attempt_num=1 2025-06-22 11:32:23.163632 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 11:32:23.190485 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 11:32:23.190535 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 11:32:23.190547 | orchestrator | + local max_attempts=60 2025-06-22 11:32:23.190559 | orchestrator | + local name=osism-ansible 2025-06-22 11:32:23.190570 | orchestrator | + local attempt_num=1 2025-06-22 11:32:23.191242 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-22 11:32:23.218461 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 11:32:23.218549 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 11:32:23.218565 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 11:32:23.869787 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-22 11:32:24.051044 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-22 11:32:24.051139 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051155 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051166 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-22 11:32:24.051179 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-22 11:32:24.051190 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051200 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051211 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-06-22 11:32:24.051222 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-22 11:32:24.051233 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-22 11:32:24.051243 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051254 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-22 11:32:24.051265 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051276 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.051286 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-22 11:32:24.061171 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 11:32:24.116634 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 11:32:24.116722 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-22 11:32:24.121566 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-22 11:32:25.824240 | orchestrator | Registering Redlock._acquired_script 2025-06-22 11:32:25.824413 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:32:25.824432 | orchestrator | Registering Redlock._release_script 2025-06-22 11:32:26.005002 | orchestrator | 2025-06-22 11:32:26 | INFO  | Task 9166f1b4-4203-45a1-9951-59739c31f08c (resolvconf) was prepared for execution. 
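Annotation: the `wait_for_container_healthy` helper whose `set -x` trace appears above can be sketched roughly as follows. Only the argument handling (`max_attempts`, `name`, `attempt_num`) and the `docker inspect -f '{{.State.Health.Status}}'` probe are visible in the trace; the retry loop, the sleep interval, and the failure path are assumptions, and the `DOCKER` override is added here only to make the sketch testable.

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as traced in the log above.
# Visible in the trace: the three locals and the docker inspect probe.
# Assumed: the retry loop, the 5s sleep, and the failure message.
DOCKER="${DOCKER:-/usr/bin/docker}"   # overridable for testing; not in the original

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until it reports "healthy"
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

In the log the helper is called as `wait_for_container_healthy 60 ceph-ansible` and returns immediately because the first probe already reports `healthy`.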
2025-06-22 11:32:26.005087 | orchestrator | 2025-06-22 11:32:26 | INFO  | It takes a moment until task 9166f1b4-4203-45a1-9951-59739c31f08c (resolvconf) has been started and output is visible here. 2025-06-22 11:32:29.889362 | orchestrator | 2025-06-22 11:32:29.889449 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-22 11:32:29.890414 | orchestrator | 2025-06-22 11:32:29.891773 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 11:32:29.894067 | orchestrator | Sunday 22 June 2025 11:32:29 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-06-22 11:32:34.467846 | orchestrator | ok: [testbed-manager] 2025-06-22 11:32:34.468854 | orchestrator | 2025-06-22 11:32:34.470071 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 11:32:34.470835 | orchestrator | Sunday 22 June 2025 11:32:34 +0000 (0:00:04.581) 0:00:04.726 *********** 2025-06-22 11:32:34.533886 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:32:34.534836 | orchestrator | 2025-06-22 11:32:34.536410 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 11:32:34.537168 | orchestrator | Sunday 22 June 2025 11:32:34 +0000 (0:00:00.066) 0:00:04.792 *********** 2025-06-22 11:32:34.611646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-22 11:32:34.612432 | orchestrator | 2025-06-22 11:32:34.613532 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 11:32:34.614371 | orchestrator | Sunday 22 June 2025 11:32:34 +0000 (0:00:00.078) 0:00:04.870 *********** 2025-06-22 11:32:34.702982 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 11:32:34.703686 | orchestrator | 2025-06-22 11:32:34.703883 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 11:32:34.705019 | orchestrator | Sunday 22 June 2025 11:32:34 +0000 (0:00:00.090) 0:00:04.961 *********** 2025-06-22 11:32:35.752964 | orchestrator | ok: [testbed-manager] 2025-06-22 11:32:35.753467 | orchestrator | 2025-06-22 11:32:35.754670 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 11:32:35.755050 | orchestrator | Sunday 22 June 2025 11:32:35 +0000 (0:00:01.049) 0:00:06.010 *********** 2025-06-22 11:32:35.814447 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:32:35.814529 | orchestrator | 2025-06-22 11:32:35.815409 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 11:32:35.816373 | orchestrator | Sunday 22 June 2025 11:32:35 +0000 (0:00:00.061) 0:00:06.071 *********** 2025-06-22 11:32:36.286165 | orchestrator | ok: [testbed-manager] 2025-06-22 11:32:36.287460 | orchestrator | 2025-06-22 11:32:36.287928 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 11:32:36.288675 | orchestrator | Sunday 22 June 2025 11:32:36 +0000 (0:00:00.472) 0:00:06.544 *********** 2025-06-22 11:32:36.364998 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:32:36.366122 | orchestrator | 2025-06-22 11:32:36.366395 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 11:32:36.366629 | orchestrator | Sunday 22 June 2025 11:32:36 +0000 (0:00:00.079) 0:00:06.623 *********** 2025-06-22 11:32:36.888799 | orchestrator | changed: [testbed-manager] 2025-06-22 11:32:36.889526 | orchestrator | 2025-06-22 
11:32:36.890499 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 11:32:36.891335 | orchestrator | Sunday 22 June 2025 11:32:36 +0000 (0:00:00.522) 0:00:07.145 *********** 2025-06-22 11:32:37.887232 | orchestrator | changed: [testbed-manager] 2025-06-22 11:32:37.887562 | orchestrator | 2025-06-22 11:32:37.888056 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 11:32:37.888766 | orchestrator | Sunday 22 June 2025 11:32:37 +0000 (0:00:00.998) 0:00:08.144 *********** 2025-06-22 11:32:38.827534 | orchestrator | ok: [testbed-manager] 2025-06-22 11:32:38.828186 | orchestrator | 2025-06-22 11:32:38.828800 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 11:32:38.829627 | orchestrator | Sunday 22 June 2025 11:32:38 +0000 (0:00:00.938) 0:00:09.083 *********** 2025-06-22 11:32:38.908413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-22 11:32:38.908497 | orchestrator | 2025-06-22 11:32:38.909635 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 11:32:38.910727 | orchestrator | Sunday 22 June 2025 11:32:38 +0000 (0:00:00.082) 0:00:09.165 *********** 2025-06-22 11:32:39.991951 | orchestrator | changed: [testbed-manager] 2025-06-22 11:32:39.992061 | orchestrator | 2025-06-22 11:32:39.992882 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:32:39.993148 | orchestrator | 2025-06-22 11:32:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 11:32:39.993637 | orchestrator | 2025-06-22 11:32:39 | INFO  | Please wait and do not abort execution. 
2025-06-22 11:32:39.994901 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 11:32:39.996200 | orchestrator |
2025-06-22 11:32:39.996221 | orchestrator |
2025-06-22 11:32:39.996233 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:32:39.996546 | orchestrator | Sunday 22 June 2025 11:32:39 +0000 (0:00:01.084) 0:00:10.249 ***********
2025-06-22 11:32:39.996870 | orchestrator | ===============================================================================
2025-06-22 11:32:39.997389 | orchestrator | Gathering Facts --------------------------------------------------------- 4.58s
2025-06-22 11:32:39.997729 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s
2025-06-22 11:32:39.998608 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2025-06-22 11:32:39.998931 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s
2025-06-22 11:32:39.998951 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2025-06-22 11:32:39.999240 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-06-22 11:32:39.999614 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2025-06-22 11:32:39.999637 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-06-22 11:32:40.000055 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-06-22 11:32:40.000357 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-06-22 11:32:40.000717 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-06-22 11:32:40.001005 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-06-22 11:32:40.001311 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-06-22 11:32:40.431742 | orchestrator | + osism apply sshconfig
2025-06-22 11:32:42.088786 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:32:42.088888 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:32:42.088904 | orchestrator | Registering Redlock._release_script
2025-06-22 11:32:42.141208 | orchestrator | 2025-06-22 11:32:42 | INFO  | Task a289ac50-b749-4b21-a6f8-b530f9540c02 (sshconfig) was prepared for execution.
2025-06-22 11:32:42.141332 | orchestrator | 2025-06-22 11:32:42 | INFO  | It takes a moment until task a289ac50-b749-4b21-a6f8-b530f9540c02 (sshconfig) has been started and output is visible here.
2025-06-22 11:32:46.046199 | orchestrator |
2025-06-22 11:32:46.047314 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-22 11:32:46.049569 | orchestrator |
2025-06-22 11:32:46.051022 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-22 11:32:46.051857 | orchestrator | Sunday 22 June 2025 11:32:46 +0000 (0:00:00.157) 0:00:00.157 ***********
2025-06-22 11:32:46.600642 | orchestrator | ok: [testbed-manager]
2025-06-22 11:32:46.600743 | orchestrator |
2025-06-22 11:32:46.602715 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-22 11:32:46.602978 | orchestrator | Sunday 22 June 2025 11:32:46 +0000 (0:00:00.556) 0:00:00.714 ***********
2025-06-22 11:32:47.098678 | orchestrator | changed: [testbed-manager]
2025-06-22 11:32:47.099348 | orchestrator |
2025-06-22 11:32:47.100347 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-22 11:32:47.100925 | orchestrator |
Sunday 22 June 2025 11:32:47 +0000 (0:00:00.496) 0:00:01.211 ***********
2025-06-22 11:32:52.821331 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-22 11:32:52.821446 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-22 11:32:52.821579 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-22 11:32:52.823489 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-22 11:32:52.824093 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-22 11:32:52.824681 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-22 11:32:52.825222 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-22 11:32:52.825793 | orchestrator |
2025-06-22 11:32:52.827758 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-22 11:32:52.828082 | orchestrator | Sunday 22 June 2025 11:32:52 +0000 (0:00:05.717) 0:00:06.929 ***********
2025-06-22 11:32:52.886651 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:32:52.887110 | orchestrator |
2025-06-22 11:32:52.888191 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-22 11:32:52.888933 | orchestrator | Sunday 22 June 2025 11:32:52 +0000 (0:00:00.069) 0:00:06.998 ***********
2025-06-22 11:32:53.457515 | orchestrator | changed: [testbed-manager]
2025-06-22 11:32:53.457905 | orchestrator |
2025-06-22 11:32:53.458474 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:32:53.458825 | orchestrator | 2025-06-22 11:32:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:32:53.459013 | orchestrator | 2025-06-22 11:32:53 | INFO  | Please wait and do not abort execution.
2025-06-22 11:32:53.460079 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:32:53.460614 | orchestrator |
2025-06-22 11:32:53.461122 | orchestrator |
2025-06-22 11:32:53.461978 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:32:53.462825 | orchestrator | Sunday 22 June 2025 11:32:53 +0000 (0:00:00.571) 0:00:07.570 ***********
2025-06-22 11:32:53.463248 | orchestrator | ===============================================================================
2025-06-22 11:32:53.463864 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.72s
2025-06-22 11:32:53.464563 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2025-06-22 11:32:53.465023 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-06-22 11:32:53.465525 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s
2025-06-22 11:32:53.466463 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-06-22 11:32:53.933473 | orchestrator | + osism apply known-hosts
2025-06-22 11:32:55.618567 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:32:55.618682 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:32:55.618697 | orchestrator | Registering Redlock._release_script
2025-06-22 11:32:55.683319 | orchestrator | 2025-06-22 11:32:55 | INFO  | Task dc44183a-cd34-428c-a424-6010ede1a20a (known-hosts) was prepared for execution.
2025-06-22 11:32:55.683401 | orchestrator | 2025-06-22 11:32:55 | INFO  | It takes a moment until task dc44183a-cd34-428c-a424-6010ede1a20a (known-hosts) has been started and output is visible here.
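Annotation: the known-hosts play that follows runs `ssh-keyscan` against every inventory host and writes the collected keys into known_hosts files. A minimal stand-alone sketch of that pattern; the `collect_known_hosts` function name, the `KEYSCAN` override, and the output handling are illustrative and not taken from the role, only the per-host keyscan of rsa/ecdsa/ed25519 keys mirrors what the log shows.

```shell
#!/usr/bin/env bash
# Sketch of the ssh-keyscan -> known_hosts pattern used by the known_hosts
# role below. Everything except the per-host keyscan itself is illustrative.
collect_known_hosts() {
    local out="$1"; shift        # $1 = output file; remaining args = hostnames
    : > "$out"                   # start from an empty file
    for host in "$@"; do
        # KEYSCAN is overridable for testing; defaults to ssh-keyscan
        "${KEYSCAN:-ssh-keyscan}" -t rsa,ecdsa,ed25519 "$host" 2>/dev/null >> "$out" || true
    done
    sort -u "$out" -o "$out"     # de-duplicate entries
}

# Example: collect_known_hosts ~/.ssh/known_hosts testbed-manager testbed-node-0
```

The actual role additionally splits the scanned entries into per-host include files before assembling them, as the `write-scanned.yml` includes in the output below show.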
2025-06-22 11:32:59.574779 | orchestrator | 2025-06-22 11:32:59.575381 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-22 11:32:59.576915 | orchestrator | 2025-06-22 11:32:59.578695 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-22 11:32:59.579503 | orchestrator | Sunday 22 June 2025 11:32:59 +0000 (0:00:00.162) 0:00:00.162 *********** 2025-06-22 11:33:05.402358 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 11:33:05.403460 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 11:33:05.404185 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 11:33:05.405866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 11:33:05.406464 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 11:33:05.407582 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 11:33:05.408675 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 11:33:05.409426 | orchestrator | 2025-06-22 11:33:05.410005 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-22 11:33:05.410518 | orchestrator | Sunday 22 June 2025 11:33:05 +0000 (0:00:05.829) 0:00:05.991 *********** 2025-06-22 11:33:05.570792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 11:33:05.571006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 11:33:05.571371 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 11:33:05.572122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 11:33:05.573056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 11:33:05.573816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 11:33:05.574426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 11:33:05.574967 | orchestrator | 2025-06-22 11:33:05.575731 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:05.575827 | orchestrator | Sunday 22 June 2025 11:33:05 +0000 (0:00:00.167) 0:00:06.159 *********** 2025-06-22 11:33:06.718731 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJfHVTDOhNcaN4UFpCE56W08g++tZ1NtmzbGnvDvi3sf) 2025-06-22 11:33:06.719627 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCsSanduwKe85Nk0HFyQmxf8xFedD4ZkktvG0pwvrpiTABNLbsWNN1Cy3chb7McMdXAqLMXcF7FohDINThu3stiSpWjSez+C8JR6xS+mtxBxNg5uz17AHI58WxVq3+eCbHaCU3DHLEzZmr8mpMBuE3uJM/7je5eAcmDZXS99yQG8w63U0zDDgEi6C48zOgJKN7dAqGg+/rUpTtrQoiYIN8JzxlCsWK8+ef36iA8K8bquumNOg5i/KkSzZ3RK9bioqK/5MSV25nXDYZa7WaLjDAzGLv1Jtv4wKMiY8Qnh5E8yf71naTI5R/Nth5R+K/239++tSnMRBPvCi5+t3R3DPXvXe52mAW1kwpHQ41HUUINFMI6ddegKWKKwF8zxLSh7nn00AmlRl4tGmCIUWba7wXFJ2YMXVOQjm/6IxrQlc+MbTRXOOOWYAjF8+ipW8sAcjD9BnCgZytS0vJWyqpBkfkYlvo9SG8WhC399iuZ+YdipUAjQ3NxK4khWp/J4pBA6j0=) 2025-06-22 11:33:06.720991 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnWMGoLkAXnLHVFpgFPr9kEin8TXIYN5lvrGXUFHkSj8YjZs+QMZ0XrLef130sbAIYHdtVo0ZvwdwpiwkBCihM=) 2025-06-22 11:33:06.721369 | orchestrator | 2025-06-22 11:33:06.722134 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:06.722934 | orchestrator | Sunday 22 June 2025 11:33:06 +0000 (0:00:01.149) 0:00:07.308 *********** 2025-06-22 11:33:07.730620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV9TlJrqAsz9W7fJAc6xJI+OnPyS+pBRMi4HN+vyAFCTxir80bNeqG+6wrcRO3d/F7N7OZjN44vJFUefrNwXdxcirx1bZIE5Tj1QzhbyrYyzdHoo8QjN2CpCXD0LIQaSytwO0JDRt5l24vyYpKy8FgcQMt3aUdh2KBy3fE0M1HyHZG61CDc9Bcad3lPwjAmkeVydtCXdRSotMttQQOSsYlqh0kglWpeoNhGeA1wGYiGN0vOeopE7GXmitmlzehHOcuZZ+hfsv6Byv66R2uLUOb0OGqn+aDWyX0we7L+zv302ZuzFIQ3wnnrBS5bNwulHFsfMbnqVJnYXcA9qMA/buJrV/yoprjHAzf+j3sx2Z+dSTHlINIdSQNdgWont8TKLzon9brVd3UeX3jM2eBb/uW4d9fCxT8I4xOEwH8eAJ0qw/k/LPmkDDe9Z9wUcPxf5WtWBmYGolnNIq+Is80qCyNomblXLdpY+XxIJTuX5LSVTKWm9Rzt4IDkBN8VfWc/Xs=) 2025-06-22 11:33:07.731374 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMUBZ5EKl4cAU/s81Fg5GkNntpynOII4qxdT8f1nb739z4m8vzFaXLJNuo3gcOOz6+U4bCM/xB3ST8DtKfFtbSQ=) 
2025-06-22 11:33:07.732195 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFqPPEsuSgU77s7Q49zttXxE32+bWzmcpgikCOTXb8Yz) 2025-06-22 11:33:07.732717 | orchestrator | 2025-06-22 11:33:07.733213 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:07.733745 | orchestrator | Sunday 22 June 2025 11:33:07 +0000 (0:00:01.011) 0:00:08.320 *********** 2025-06-22 11:33:08.760675 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOZ1S2bN4tdG/EJJM25cVuH5Um0zQxynVik4Q3PLVWbeAXTQSfUnWgrbi73zPbcSFL2YfriPNK75DnLTH8Sypmw=) 2025-06-22 11:33:08.761184 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQcYJd/3Vzl/XgxOQkl7+pe9/FymudHF5APk1PvUyu5vMFN11mRRX3RNYy4nkJGo50jyJVc5bkTnAtBxdaL6h3qTSDzwilHwIA4mqPupUhYLiP9xrp+HtkTgv3zMvS018+nQ+IxqE/+Fgf0sKVf1EbTu2MKPFuTK5pPsCjW40x7tWpqo9W9F0lHl7uhcK0I8Q/beYOuT1W2LK3lzJeE7+CkPCyIV3IBr/6ijQCWnn0oasM4eThM0GJDIU5Y5YZq5CLGk91LrNErZQAgdCmRaIP19k6QptslH7hE0PKkwOnwM2FxK2RmVJn8zZN+32urYRGKjCgPAZag+QTOiHwuPaXqK6lZxUfvRxrSaYela8bXrb5dzFWUKG0pf7ihkELoglZnb1hIOOAqacSDjKu2HNEEYy8pEcgUKu2MRAbwwNb9IkQHYdLIKw+riC7QnFpLwpcHwE+tbpF6E+tEAtWgSXYh5SEXzpqQ4e6wECZPLUlDa+vxjnKiFu3cB901/awEm0=) 2025-06-22 11:33:08.762522 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB9sUKCGt2IOMSr3VYpxJDQ5DJ+3yFUwfsRB1RbJzBVK) 2025-06-22 11:33:08.762576 | orchestrator | 2025-06-22 11:33:08.762854 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:08.763443 | orchestrator | Sunday 22 June 2025 11:33:08 +0000 (0:00:01.030) 0:00:09.350 *********** 2025-06-22 11:33:09.817475 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtyzZdiKLQV80wrmvjAmBmlu6zo6YW/ZDRYJyVTbcknoIYvxzkhZ9T/ZkzJOxRD7KyeOvTg7Gjw1vFwJu9+AqYivyl04JP3wzJu9P9Pfp+R+n3h+OSr7EKEN32S7J7z53KukSFW1HlYuc8QEFndQfgeheSI5l0l3tLufW3En1E2YSncDwTnhz99Yi/FrhejqQmRxJ4HaKwdAdfo3s0c8Eqjnev4A5VFHmBSopQVgiu2kdk2xq7m4G3FH4NT4k3tf3bCLjajDcmJnrRgwor+oT7utq+Xj3gQ7+vcVNSkIoc0jBWixv1i7vusaSayr21EePr+bzD741v4a1a54C2KZpLOV7r8HzWql7lskiYVksOJ0aHiB99lL0z0jzL5lip3haVyThnoyhfGCWXLxxTvZkw2ZxRGeTgTBfHtfrq0XZXCRznn9WTs/CyTmi/6iooxkFJNhpFIhpRYR+1pwcLJwvVBhrulxPMAe0jysEDnKUbBHBB9R1GhRr9+BxuFaoRuK0=) 2025-06-22 11:33:09.817724 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEY8oiFAB58MiQ4KrHwSqSqLDw6iQJm70VPauBsS9r9LuIfMzp5f5r4wP+0Z/k5fZF7k/pq10+XT2uzFUnoqVHw=) 2025-06-22 11:33:09.818331 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMDZFSBXy/WSl2K7y7N1YJ12gF3b7D+Wo42VSi7axZfB) 2025-06-22 11:33:09.819185 | orchestrator | 2025-06-22 11:33:09.819417 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:09.820093 | orchestrator | Sunday 22 June 2025 11:33:09 +0000 (0:00:01.056) 0:00:10.407 *********** 2025-06-22 11:33:10.809592 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChgak0wSkFVtajQ0vJDnRCVzyNYYIEuSd/lA+CwhcTaM+Zfu4DomX9K9lP6JKU+MgqkLwTj7oDWXyTn+A0n0X4e8OqHttscEOhkpTbrrf/bbjf+ooi60TwS84h2gholUIEh+QKflESdQ1CAgWuhKLBEb/v3OdkmicOuq/YHTwmO7GcZ30sklzc7mjlEz5NC2BiAQIhPpr+dTtD+Xui1jUlavZNCLC4b4bFt0wj4gBKPhr4U5gpPrfeX+JqXgohiV5qNCRzUXXyMzvdr8Yr+EDmk84o5wp0dXl9AFEs+H/rYxJXNQGVv01ZmJ55ljrVSi1pg9GRlHfkxa6xxIP45GxkRKe1yJ3NI4Ap2WgUhIr+10e5js+tvqWHNA+3rhzIlFdin8arnhbCVnONwAsPBQaDfVgP+mIsSthOElusDIgQto3JS7H4g2e6wCHio2SMm3N6t3Mnu2+GFC9rEOLAcxCCeUmhCaOooH3jhyG5glN/plOHVa7NflU9jp27zQo296k=) 2025-06-22 11:33:10.809701 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAzxfHW0G8Yh9ejlKbzlTBxGitTMzyeJWl5XobgzAtN/rEB5CnQv7RiepRemzegloyLyUKBX202LpOUOeh48iM=) 2025-06-22 11:33:10.809790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOE48umCDwAj8tOBWGOaA1a4KulZRUOr4Bv16v0jT/oK) 2025-06-22 11:33:10.810334 | orchestrator | 2025-06-22 11:33:10.811350 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:10.811962 | orchestrator | Sunday 22 June 2025 11:33:10 +0000 (0:00:00.988) 0:00:11.396 *********** 2025-06-22 11:33:11.843708 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOERzT482Goie5UbUaZAIwiY6QlkJy0H1B4HDut4h+FvM8c/R6uviLD5BgCczxjcKhKLCu4HzP+Ejd2n1CNAWUg=) 2025-06-22 11:33:11.844787 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhjf6BjzT8PN8jKr6apcuXX249x8csk2VzbmFR4uMfhTM0z8tS6g10WCLTj3AZLnZxvshIRCtla0sPvv61iYZDJaYT5OeeDKED8+lLskX0S7QAHvqyhOCm65gaGQ2+QeW5Nl76cAHV8AUnavb5NzLneZN+/oRO60PnMTF/jLJqwr60j+w6/1xH9I6K6QX8qIPBv93kWgxxidNY1Ox9+ByhtjmMrgeaOuYYUMO5UxTK7qsoznbHPVAzABHBXgaI4NmPjXkWsZHT+tu/3l8EajitJCYkfWe20DykdB8pS+loAS2gdDKyN5zbCaErSmQKlCSjs4kW9UriKgVvRLXKPcsqVgFOA8UqwCrRYpuFVQv/GBDYDjp2C1EEQgRatizVOeIEUmIMzP7D/kvwdw0XAzPoTGXw44Aqhf1TGzmd/mXioZq9a739HNLfTb67yoYAqR7vNa0KWfYgKvn6qLE4u4zvOv0xmwsfo7Pu2kIdLI+kBDoDiuePW5TeBFkK7sT3hUs=) 2025-06-22 11:33:11.845621 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy67PhroXSlPjYKCnZoQfcI7oz0EyoapGqkRcNTATA3) 2025-06-22 11:33:11.846320 | orchestrator | 2025-06-22 11:33:11.846967 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:11.847623 | orchestrator | Sunday 22 June 2025 11:33:11 +0000 (0:00:01.036) 
0:00:12.432 *********** 2025-06-22 11:33:12.869967 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH84c63/YuGbMzBlrnPZ3CWVsthdM/baaKIPOMIEBIG0rZMN6vtm0c5RC1JdSwwS+XObkCU3r+YwbnSPXAYIsqdxcSnNohiRBAJlP/PpTp/8oNsQ5OXTiD5H4zBfaE1sOoNqdQRW1EwsLFa2BLJqysSyTyLoTIIe/wJrXl3XMqtzOO8ic5KguGK0BoJ3AMJWFwLQOAhNesnVn+SzlBSY/aZ868AHhZi9fGAQEakWPzkaOKYc0AxauejBPMgZkYTjrOA6aO21Hp+2b8OtJ2MaYD0VeOoxQIFpzPQgegxvFI1HZwWESFc7OCB2bMj12GYXTwL1grlXIlMR/Rn3C/EqgKcLfd1eLwsIs2cRlu4K5bjZibq64UevpDNUDD2Ft//Sx0yImZVMouk+FabrjQRYRmqiGXqqZmRem5bDOckeT5kND+kiKyZgsRuiGSWNCm5Z6kcbE4E3T/OapmBaP5iwnYu1lQgTbDnRO60zwvu5g7FINVJbfQ4Cc0u3lHciQNUvU=) 2025-06-22 11:33:12.870773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoWR/ntCPeUv0aM0EgSx6Z6Q3DK5QpsEaHty8QAe61Ra1ifqB2Pd7BdwgUKJsJM/mYA7DPkheWHQXrsnGS+yr0=) 2025-06-22 11:33:12.872022 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6rCGLqgRR4N844y+5669nYj2yRanJ9zWVbYgq4HhdK) 2025-06-22 11:33:12.872778 | orchestrator | 2025-06-22 11:33:12.873271 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-22 11:33:12.873933 | orchestrator | Sunday 22 June 2025 11:33:12 +0000 (0:00:01.026) 0:00:13.459 *********** 2025-06-22 11:33:18.036606 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 11:33:18.038211 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 11:33:18.038293 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 11:33:18.038935 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 11:33:18.039607 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 11:33:18.040252 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 11:33:18.040985 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-06-22 11:33:18.041962 | orchestrator | 2025-06-22 11:33:18.043674 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-22 11:33:18.043950 | orchestrator | Sunday 22 June 2025 11:33:18 +0000 (0:00:05.166) 0:00:18.626 *********** 2025-06-22 11:33:18.205806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 11:33:18.206712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 11:33:18.207409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 11:33:18.209204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 11:33:18.209346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 11:33:18.209602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 11:33:18.210095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 11:33:18.210791 | orchestrator | 2025-06-22 11:33:18.211904 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:18.212630 | orchestrator | Sunday 22 June 2025 11:33:18 +0000 (0:00:00.170) 0:00:18.796 *********** 2025-06-22 11:33:19.206467 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJfHVTDOhNcaN4UFpCE56W08g++tZ1NtmzbGnvDvi3sf) 2025-06-22 11:33:19.207650 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsSanduwKe85Nk0HFyQmxf8xFedD4ZkktvG0pwvrpiTABNLbsWNN1Cy3chb7McMdXAqLMXcF7FohDINThu3stiSpWjSez+C8JR6xS+mtxBxNg5uz17AHI58WxVq3+eCbHaCU3DHLEzZmr8mpMBuE3uJM/7je5eAcmDZXS99yQG8w63U0zDDgEi6C48zOgJKN7dAqGg+/rUpTtrQoiYIN8JzxlCsWK8+ef36iA8K8bquumNOg5i/KkSzZ3RK9bioqK/5MSV25nXDYZa7WaLjDAzGLv1Jtv4wKMiY8Qnh5E8yf71naTI5R/Nth5R+K/239++tSnMRBPvCi5+t3R3DPXvXe52mAW1kwpHQ41HUUINFMI6ddegKWKKwF8zxLSh7nn00AmlRl4tGmCIUWba7wXFJ2YMXVOQjm/6IxrQlc+MbTRXOOOWYAjF8+ipW8sAcjD9BnCgZytS0vJWyqpBkfkYlvo9SG8WhC399iuZ+YdipUAjQ3NxK4khWp/J4pBA6j0=) 2025-06-22 11:33:19.207968 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnWMGoLkAXnLHVFpgFPr9kEin8TXIYN5lvrGXUFHkSj8YjZs+QMZ0XrLef130sbAIYHdtVo0ZvwdwpiwkBCihM=) 2025-06-22 11:33:19.208986 | orchestrator | 2025-06-22 11:33:19.210401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:19.210553 | orchestrator | Sunday 22 June 2025 11:33:19 +0000 (0:00:01.000) 0:00:19.796 *********** 2025-06-22 11:33:20.232790 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMUBZ5EKl4cAU/s81Fg5GkNntpynOII4qxdT8f1nb739z4m8vzFaXLJNuo3gcOOz6+U4bCM/xB3ST8DtKfFtbSQ=) 2025-06-22 11:33:20.233404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIFqPPEsuSgU77s7Q49zttXxE32+bWzmcpgikCOTXb8Yz) 2025-06-22 11:33:20.234777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV9TlJrqAsz9W7fJAc6xJI+OnPyS+pBRMi4HN+vyAFCTxir80bNeqG+6wrcRO3d/F7N7OZjN44vJFUefrNwXdxcirx1bZIE5Tj1QzhbyrYyzdHoo8QjN2CpCXD0LIQaSytwO0JDRt5l24vyYpKy8FgcQMt3aUdh2KBy3fE0M1HyHZG61CDc9Bcad3lPwjAmkeVydtCXdRSotMttQQOSsYlqh0kglWpeoNhGeA1wGYiGN0vOeopE7GXmitmlzehHOcuZZ+hfsv6Byv66R2uLUOb0OGqn+aDWyX0we7L+zv302ZuzFIQ3wnnrBS5bNwulHFsfMbnqVJnYXcA9qMA/buJrV/yoprjHAzf+j3sx2Z+dSTHlINIdSQNdgWont8TKLzon9brVd3UeX3jM2eBb/uW4d9fCxT8I4xOEwH8eAJ0qw/k/LPmkDDe9Z9wUcPxf5WtWBmYGolnNIq+Is80qCyNomblXLdpY+XxIJTuX5LSVTKWm9Rzt4IDkBN8VfWc/Xs=) 2025-06-22 11:33:20.235425 | orchestrator | 2025-06-22 11:33:20.236004 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:20.236475 | orchestrator | Sunday 22 June 2025 11:33:20 +0000 (0:00:01.025) 0:00:20.821 *********** 2025-06-22 11:33:21.267014 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQcYJd/3Vzl/XgxOQkl7+pe9/FymudHF5APk1PvUyu5vMFN11mRRX3RNYy4nkJGo50jyJVc5bkTnAtBxdaL6h3qTSDzwilHwIA4mqPupUhYLiP9xrp+HtkTgv3zMvS018+nQ+IxqE/+Fgf0sKVf1EbTu2MKPFuTK5pPsCjW40x7tWpqo9W9F0lHl7uhcK0I8Q/beYOuT1W2LK3lzJeE7+CkPCyIV3IBr/6ijQCWnn0oasM4eThM0GJDIU5Y5YZq5CLGk91LrNErZQAgdCmRaIP19k6QptslH7hE0PKkwOnwM2FxK2RmVJn8zZN+32urYRGKjCgPAZag+QTOiHwuPaXqK6lZxUfvRxrSaYela8bXrb5dzFWUKG0pf7ihkELoglZnb1hIOOAqacSDjKu2HNEEYy8pEcgUKu2MRAbwwNb9IkQHYdLIKw+riC7QnFpLwpcHwE+tbpF6E+tEAtWgSXYh5SEXzpqQ4e6wECZPLUlDa+vxjnKiFu3cB901/awEm0=) 2025-06-22 11:33:21.268804 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB9sUKCGt2IOMSr3VYpxJDQ5DJ+3yFUwfsRB1RbJzBVK) 2025-06-22 11:33:21.270165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOZ1S2bN4tdG/EJJM25cVuH5Um0zQxynVik4Q3PLVWbeAXTQSfUnWgrbi73zPbcSFL2YfriPNK75DnLTH8Sypmw=) 2025-06-22 11:33:21.270793 | orchestrator | 2025-06-22 11:33:21.271608 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:21.272271 | orchestrator | Sunday 22 June 2025 11:33:21 +0000 (0:00:01.035) 0:00:21.856 *********** 2025-06-22 11:33:22.309880 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtyzZdiKLQV80wrmvjAmBmlu6zo6YW/ZDRYJyVTbcknoIYvxzkhZ9T/ZkzJOxRD7KyeOvTg7Gjw1vFwJu9+AqYivyl04JP3wzJu9P9Pfp+R+n3h+OSr7EKEN32S7J7z53KukSFW1HlYuc8QEFndQfgeheSI5l0l3tLufW3En1E2YSncDwTnhz99Yi/FrhejqQmRxJ4HaKwdAdfo3s0c8Eqjnev4A5VFHmBSopQVgiu2kdk2xq7m4G3FH4NT4k3tf3bCLjajDcmJnrRgwor+oT7utq+Xj3gQ7+vcVNSkIoc0jBWixv1i7vusaSayr21EePr+bzD741v4a1a54C2KZpLOV7r8HzWql7lskiYVksOJ0aHiB99lL0z0jzL5lip3haVyThnoyhfGCWXLxxTvZkw2ZxRGeTgTBfHtfrq0XZXCRznn9WTs/CyTmi/6iooxkFJNhpFIhpRYR+1pwcLJwvVBhrulxPMAe0jysEDnKUbBHBB9R1GhRr9+BxuFaoRuK0=) 2025-06-22 11:33:22.310100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEY8oiFAB58MiQ4KrHwSqSqLDw6iQJm70VPauBsS9r9LuIfMzp5f5r4wP+0Z/k5fZF7k/pq10+XT2uzFUnoqVHw=) 2025-06-22 11:33:22.310856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMDZFSBXy/WSl2K7y7N1YJ12gF3b7D+Wo42VSi7axZfB) 2025-06-22 11:33:22.311249 | orchestrator | 2025-06-22 11:33:22.311707 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:22.312131 | orchestrator | Sunday 22 June 2025 11:33:22 +0000 (0:00:01.043) 0:00:22.899 *********** 2025-06-22 11:33:23.341129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChgak0wSkFVtajQ0vJDnRCVzyNYYIEuSd/lA+CwhcTaM+Zfu4DomX9K9lP6JKU+MgqkLwTj7oDWXyTn+A0n0X4e8OqHttscEOhkpTbrrf/bbjf+ooi60TwS84h2gholUIEh+QKflESdQ1CAgWuhKLBEb/v3OdkmicOuq/YHTwmO7GcZ30sklzc7mjlEz5NC2BiAQIhPpr+dTtD+Xui1jUlavZNCLC4b4bFt0wj4gBKPhr4U5gpPrfeX+JqXgohiV5qNCRzUXXyMzvdr8Yr+EDmk84o5wp0dXl9AFEs+H/rYxJXNQGVv01ZmJ55ljrVSi1pg9GRlHfkxa6xxIP45GxkRKe1yJ3NI4Ap2WgUhIr+10e5js+tvqWHNA+3rhzIlFdin8arnhbCVnONwAsPBQaDfVgP+mIsSthOElusDIgQto3JS7H4g2e6wCHio2SMm3N6t3Mnu2+GFC9rEOLAcxCCeUmhCaOooH3jhyG5glN/plOHVa7NflU9jp27zQo296k=) 2025-06-22 11:33:23.341279 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAzxfHW0G8Yh9ejlKbzlTBxGitTMzyeJWl5XobgzAtN/rEB5CnQv7RiepRemzegloyLyUKBX202LpOUOeh48iM=) 2025-06-22 11:33:23.341375 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOE48umCDwAj8tOBWGOaA1a4KulZRUOr4Bv16v0jT/oK) 2025-06-22 11:33:23.341393 | orchestrator | 2025-06-22 11:33:23.341635 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:23.342243 | orchestrator | Sunday 22 June 2025 11:33:23 +0000 (0:00:01.028) 0:00:23.928 *********** 2025-06-22 11:33:24.363480 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy67PhroXSlPjYKCnZoQfcI7oz0EyoapGqkRcNTATA3) 2025-06-22 11:33:24.366426 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDhjf6BjzT8PN8jKr6apcuXX249x8csk2VzbmFR4uMfhTM0z8tS6g10WCLTj3AZLnZxvshIRCtla0sPvv61iYZDJaYT5OeeDKED8+lLskX0S7QAHvqyhOCm65gaGQ2+QeW5Nl76cAHV8AUnavb5NzLneZN+/oRO60PnMTF/jLJqwr60j+w6/1xH9I6K6QX8qIPBv93kWgxxidNY1Ox9+ByhtjmMrgeaOuYYUMO5UxTK7qsoznbHPVAzABHBXgaI4NmPjXkWsZHT+tu/3l8EajitJCYkfWe20DykdB8pS+loAS2gdDKyN5zbCaErSmQKlCSjs4kW9UriKgVvRLXKPcsqVgFOA8UqwCrRYpuFVQv/GBDYDjp2C1EEQgRatizVOeIEUmIMzP7D/kvwdw0XAzPoTGXw44Aqhf1TGzmd/mXioZq9a739HNLfTb67yoYAqR7vNa0KWfYgKvn6qLE4u4zvOv0xmwsfo7Pu2kIdLI+kBDoDiuePW5TeBFkK7sT3hUs=) 2025-06-22 11:33:24.366509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOERzT482Goie5UbUaZAIwiY6QlkJy0H1B4HDut4h+FvM8c/R6uviLD5BgCczxjcKhKLCu4HzP+Ejd2n1CNAWUg=) 2025-06-22 11:33:24.366527 | orchestrator | 2025-06-22 11:33:24.366540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 11:33:24.366554 | orchestrator | Sunday 22 June 2025 11:33:24 +0000 (0:00:01.023) 0:00:24.952 *********** 2025-06-22 11:33:25.387111 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKoWR/ntCPeUv0aM0EgSx6Z6Q3DK5QpsEaHty8QAe61Ra1ifqB2Pd7BdwgUKJsJM/mYA7DPkheWHQXrsnGS+yr0=) 2025-06-22 11:33:25.387623 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH84c63/YuGbMzBlrnPZ3CWVsthdM/baaKIPOMIEBIG0rZMN6vtm0c5RC1JdSwwS+XObkCU3r+YwbnSPXAYIsqdxcSnNohiRBAJlP/PpTp/8oNsQ5OXTiD5H4zBfaE1sOoNqdQRW1EwsLFa2BLJqysSyTyLoTIIe/wJrXl3XMqtzOO8ic5KguGK0BoJ3AMJWFwLQOAhNesnVn+SzlBSY/aZ868AHhZi9fGAQEakWPzkaOKYc0AxauejBPMgZkYTjrOA6aO21Hp+2b8OtJ2MaYD0VeOoxQIFpzPQgegxvFI1HZwWESFc7OCB2bMj12GYXTwL1grlXIlMR/Rn3C/EqgKcLfd1eLwsIs2cRlu4K5bjZibq64UevpDNUDD2Ft//Sx0yImZVMouk+FabrjQRYRmqiGXqqZmRem5bDOckeT5kND+kiKyZgsRuiGSWNCm5Z6kcbE4E3T/OapmBaP5iwnYu1lQgTbDnRO60zwvu5g7FINVJbfQ4Cc0u3lHciQNUvU=) 
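The "Write scanned known_hosts entries" tasks above append one `name keytype base64key` line per scanned host key, once for hostnames and once for `ansible_host` addresses, and report `changed` only when a line is actually written. A minimal sketch of that idempotent append logic (hypothetical helper, not the actual code of the `osism.commons.known_hosts` role):

```python
from pathlib import Path


def write_known_hosts(entries, path):
    """Append scanned entries, skipping (host, keytype) pairs already present.

    Each entry is a "name keytype base64key" string, as in the log above.
    Returns the list of entries that were newly written (the "changed" set).
    """
    path = Path(path)
    existing = set()
    if path.exists():
        for line in path.read_text().splitlines():
            parts = line.split()
            if len(parts) >= 2:
                existing.add((parts[0], parts[1]))
    changed = []
    with path.open("a") as fh:
        for entry in entries:
            host, keytype = entry.split()[:2]
            if (host, keytype) not in existing:
                fh.write(entry + "\n")
                existing.add((host, keytype))
                changed.append(entry)
    return changed
```

On a second run with the same scanned entries nothing is appended, which is why re-running the play reports `ok` instead of `changed` for unchanged hosts.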
2025-06-22 11:33:25.388019 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6rCGLqgRR4N844y+5669nYj2yRanJ9zWVbYgq4HhdK) 2025-06-22 11:33:25.388817 | orchestrator | 2025-06-22 11:33:25.389132 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-22 11:33:25.389536 | orchestrator | Sunday 22 June 2025 11:33:25 +0000 (0:00:01.022) 0:00:25.975 *********** 2025-06-22 11:33:25.526248 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 11:33:25.526401 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 11:33:25.526856 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 11:33:25.527311 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 11:33:25.528022 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 11:33:25.529903 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 11:33:25.530116 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 11:33:25.530710 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:33:25.531264 | orchestrator | 2025-06-22 11:33:25.531889 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-22 11:33:25.532301 | orchestrator | Sunday 22 June 2025 11:33:25 +0000 (0:00:00.142) 0:00:26.117 *********** 2025-06-22 11:33:25.580577 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:33:25.581246 | orchestrator | 2025-06-22 11:33:25.582188 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-22 11:33:25.582879 | orchestrator | Sunday 22 June 2025 11:33:25 +0000 (0:00:00.053) 0:00:26.171 *********** 2025-06-22 11:33:25.627575 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:33:25.628410 | orchestrator | 2025-06-22 
11:33:25.632760 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-22 11:33:25.634526 | orchestrator | Sunday 22 June 2025 11:33:25 +0000 (0:00:00.048) 0:00:26.219 *********** 2025-06-22 11:33:26.117680 | orchestrator | changed: [testbed-manager] 2025-06-22 11:33:26.118597 | orchestrator | 2025-06-22 11:33:26.119007 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:33:26.119782 | orchestrator | 2025-06-22 11:33:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 11:33:26.120314 | orchestrator | 2025-06-22 11:33:26 | INFO  | Please wait and do not abort execution. 2025-06-22 11:33:26.121434 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 11:33:26.122379 | orchestrator | 2025-06-22 11:33:26.123116 | orchestrator | 2025-06-22 11:33:26.123954 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:33:26.124974 | orchestrator | Sunday 22 June 2025 11:33:26 +0000 (0:00:00.489) 0:00:26.708 *********** 2025-06-22 11:33:26.125761 | orchestrator | =============================================================================== 2025-06-22 11:33:26.126528 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.83s 2025-06-22 11:33:26.126858 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.17s 2025-06-22 11:33:26.127505 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-06-22 11:33:26.127928 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-22 11:33:26.128846 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 11:33:26.130608 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 11:33:26.131492 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 11:33:26.132869 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-22 11:33:26.134115 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-22 11:33:26.135062 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-22 11:33:26.136208 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-22 11:33:26.137144 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-22 11:33:26.138266 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-22 11:33:26.138969 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-22 11:33:26.139811 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-06-22 11:33:26.140820 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-06-22 11:33:26.141619 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2025-06-22 11:33:26.142261 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-06-22 11:33:26.142905 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-06-22 11:33:26.143794 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.14s 2025-06-22 11:33:26.610966 | orchestrator | + osism apply squid 2025-06-22 11:33:28.206736 | orchestrator | Registering 
Redlock._acquired_script 2025-06-22 11:33:28.206856 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:33:28.206896 | orchestrator | Registering Redlock._release_script 2025-06-22 11:33:28.260274 | orchestrator | 2025-06-22 11:33:28 | INFO  | Task 7f68f78b-ae86-4227-becf-dbbe892a5866 (squid) was prepared for execution. 2025-06-22 11:33:28.260343 | orchestrator | 2025-06-22 11:33:28 | INFO  | It takes a moment until task 7f68f78b-ae86-4227-becf-dbbe892a5866 (squid) has been started and output is visible here. 2025-06-22 11:33:32.125050 | orchestrator | 2025-06-22 11:33:32.125804 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-22 11:33:32.127039 | orchestrator | 2025-06-22 11:33:32.128078 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-22 11:33:32.128668 | orchestrator | Sunday 22 June 2025 11:33:32 +0000 (0:00:00.159) 0:00:00.159 *********** 2025-06-22 11:33:32.205376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 11:33:32.205878 | orchestrator | 2025-06-22 11:33:32.207365 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-22 11:33:32.208611 | orchestrator | Sunday 22 June 2025 11:33:32 +0000 (0:00:00.082) 0:00:00.242 *********** 2025-06-22 11:33:33.549777 | orchestrator | ok: [testbed-manager] 2025-06-22 11:33:33.549948 | orchestrator | 2025-06-22 11:33:33.550722 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-22 11:33:33.551351 | orchestrator | Sunday 22 June 2025 11:33:33 +0000 (0:00:01.341) 0:00:01.583 *********** 2025-06-22 11:33:34.702673 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-22 11:33:34.702789 | orchestrator | changed: [testbed-manager] => 
(item=/opt/squid/configuration/conf.d) 2025-06-22 11:33:34.702902 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-22 11:33:34.704152 | orchestrator | 2025-06-22 11:33:34.704595 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-22 11:33:34.705337 | orchestrator | Sunday 22 June 2025 11:33:34 +0000 (0:00:01.153) 0:00:02.737 *********** 2025-06-22 11:33:35.746345 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-22 11:33:35.748518 | orchestrator | 2025-06-22 11:33:35.749265 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-22 11:33:35.749816 | orchestrator | Sunday 22 June 2025 11:33:35 +0000 (0:00:01.045) 0:00:03.782 *********** 2025-06-22 11:33:36.101965 | orchestrator | ok: [testbed-manager] 2025-06-22 11:33:36.102118 | orchestrator | 2025-06-22 11:33:36.102135 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-22 11:33:36.102149 | orchestrator | Sunday 22 June 2025 11:33:36 +0000 (0:00:00.353) 0:00:04.136 *********** 2025-06-22 11:33:37.016803 | orchestrator | changed: [testbed-manager] 2025-06-22 11:33:37.017451 | orchestrator | 2025-06-22 11:33:37.018112 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-22 11:33:37.018627 | orchestrator | Sunday 22 June 2025 11:33:37 +0000 (0:00:00.917) 0:00:05.053 *********** 2025-06-22 11:34:08.062274 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-22 11:34:08.062395 | orchestrator | ok: [testbed-manager] 2025-06-22 11:34:08.062413 | orchestrator | 2025-06-22 11:34:08.062426 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-22 11:34:08.062439 | orchestrator | Sunday 22 June 2025 11:34:08 +0000 (0:00:31.039) 0:00:36.093 *********** 2025-06-22 11:34:19.943311 | orchestrator | changed: [testbed-manager] 2025-06-22 11:34:19.943432 | orchestrator | 2025-06-22 11:34:19.943450 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-22 11:34:19.943464 | orchestrator | Sunday 22 June 2025 11:34:19 +0000 (0:00:11.882) 0:00:47.975 *********** 2025-06-22 11:35:20.008342 | orchestrator | Pausing for 60 seconds 2025-06-22 11:35:20.008462 | orchestrator | changed: [testbed-manager] 2025-06-22 11:35:20.008479 | orchestrator | 2025-06-22 11:35:20.009717 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-22 11:35:20.009956 | orchestrator | Sunday 22 June 2025 11:35:20 +0000 (0:01:00.065) 0:01:48.041 *********** 2025-06-22 11:35:20.071834 | orchestrator | ok: [testbed-manager] 2025-06-22 11:35:20.072454 | orchestrator | 2025-06-22 11:35:20.073578 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-22 11:35:20.074265 | orchestrator | Sunday 22 June 2025 11:35:20 +0000 (0:00:00.068) 0:01:48.109 *********** 2025-06-22 11:35:20.676235 | orchestrator | changed: [testbed-manager] 2025-06-22 11:35:20.677307 | orchestrator | 2025-06-22 11:35:20.678154 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:35:20.678849 | orchestrator | 2025-06-22 11:35:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
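The "Manage squid service" task above retried until the container came up (the `FAILED - RETRYING: ... (10 retries left)` line), and the restart handlers then paused and polled for a healthy service. A generic retry-until-healthy helper along those lines (a sketch of the pattern, assuming a caller-supplied `check` callable; this is not the role's actual implementation, which uses Ansible's `retries`/`until`):

```python
import time


def wait_until_healthy(check, retries=10, delay=1.0):
    """Call check() up to `retries` times, sleeping `delay` between attempts.

    Returns the number of attempts used on success; raises TimeoutError if
    check() never returns True -- roughly what Ansible's retries/until does.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            time.sleep(delay)
    raise TimeoutError(f"service not healthy after {retries} attempts")
```

In the log, the first health check fails (one retry is consumed) and the second succeeds, so the task ends `ok` after roughly 31 seconds.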
2025-06-22 11:35:20.679003 | orchestrator | 2025-06-22 11:35:20 | INFO  | Please wait and do not abort execution. 2025-06-22 11:35:20.680265 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:35:20.681122 | orchestrator | 2025-06-22 11:35:20.682213 | orchestrator | 2025-06-22 11:35:20.682869 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:35:20.683451 | orchestrator | Sunday 22 June 2025 11:35:20 +0000 (0:00:00.603) 0:01:48.713 *********** 2025-06-22 11:35:20.684026 | orchestrator | =============================================================================== 2025-06-22 11:35:20.684807 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-22 11:35:20.685682 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.04s 2025-06-22 11:35:20.686148 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.88s 2025-06-22 11:35:20.686689 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.34s 2025-06-22 11:35:20.687197 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2025-06-22 11:35:20.687737 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-06-22 11:35:20.688309 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-06-22 11:35:20.688846 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-06-22 11:35:20.689346 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-06-22 11:35:20.689780 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-06-22 11:35:20.690229 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-22 11:35:21.118676 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 11:35:21.118779 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-22 11:35:21.123538 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-22 11:35:21.178367 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-22 11:35:21.179344 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-22 11:35:22.780182 | orchestrator | Registering Redlock._acquired_script 2025-06-22 11:35:22.780282 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:35:22.780296 | orchestrator | Registering Redlock._release_script 2025-06-22 11:35:22.837176 | orchestrator | 2025-06-22 11:35:22 | INFO  | Task d9d86d2e-504d-4961-b83a-65c6b31d8158 (operator) was prepared for execution. 2025-06-22 11:35:22.837277 | orchestrator | 2025-06-22 11:35:22 | INFO  | It takes a moment until task d9d86d2e-504d-4961-b83a-65c6b31d8158 (operator) has been started and output is visible here. 
2025-06-22 11:35:26.672188 | orchestrator |
2025-06-22 11:35:26.672283 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-22 11:35:26.672299 | orchestrator |
2025-06-22 11:35:26.672311 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-22 11:35:26.673361 | orchestrator | Sunday 22 June 2025 11:35:26 +0000 (0:00:00.113) 0:00:00.113 ***********
2025-06-22 11:35:29.889579 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:35:29.890236 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:29.891131 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:35:29.891947 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:35:29.892723 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:29.893664 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:29.894428 | orchestrator |
2025-06-22 11:35:29.895347 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-22 11:35:29.895684 | orchestrator | Sunday 22 June 2025 11:35:29 +0000 (0:00:03.221) 0:00:03.335 ***********
2025-06-22 11:35:30.633843 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:35:30.634757 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:35:30.636385 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:30.636406 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:35:30.636770 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:30.637541 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:30.638446 | orchestrator |
2025-06-22 11:35:30.639281 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-22 11:35:30.639803 | orchestrator |
2025-06-22 11:35:30.640886 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-22 11:35:30.640904 | orchestrator | Sunday 22 June 2025 11:35:30 +0000 (0:00:00.744) 0:00:04.080 ***********
2025-06-22 11:35:30.690914 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:35:30.707318 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:35:30.724733 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:35:30.759140 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:30.760955 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:30.762522 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:30.763698 | orchestrator |
2025-06-22 11:35:30.764217 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-22 11:35:30.765086 | orchestrator | Sunday 22 June 2025 11:35:30 +0000 (0:00:00.125) 0:00:04.206 ***********
2025-06-22 11:35:30.843381 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:35:30.865528 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:35:30.902766 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:35:30.903885 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:30.904106 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:30.905100 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:30.906013 | orchestrator |
2025-06-22 11:35:30.906509 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-22 11:35:30.907371 | orchestrator | Sunday 22 June 2025 11:35:30 +0000 (0:00:00.143) 0:00:04.349 ***********
2025-06-22 11:35:31.538411 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:31.539519 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:31.539740 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:31.540856 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:31.541609 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:31.542505 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:31.542922 | orchestrator |
2025-06-22 11:35:31.543499 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-22 11:35:31.544298 | orchestrator | Sunday 22 June 2025 11:35:31 +0000 (0:00:00.633) 0:00:04.983 ***********
2025-06-22 11:35:32.499488 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:32.499811 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:32.501110 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:32.502182 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:32.502661 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:32.503464 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:32.504348 | orchestrator |
2025-06-22 11:35:32.505115 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-22 11:35:32.505664 | orchestrator | Sunday 22 June 2025 11:35:32 +0000 (0:00:00.959) 0:00:05.942 ***********
2025-06-22 11:35:33.656308 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-22 11:35:33.657414 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-22 11:35:33.657782 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-22 11:35:33.658821 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-22 11:35:33.660560 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-22 11:35:33.660581 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-22 11:35:33.661117 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-22 11:35:33.661820 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-22 11:35:33.662229 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-22 11:35:33.662848 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-22 11:35:33.663503 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-22 11:35:33.663878 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-22 11:35:33.664597 | orchestrator |
2025-06-22 11:35:33.665117 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-22 11:35:33.665487 | orchestrator | Sunday 22 June 2025 11:35:33 +0000 (0:00:01.158) 0:00:07.101 ***********
2025-06-22 11:35:34.878323 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:34.878880 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:34.879594 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:34.880563 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:34.882159 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:34.882885 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:34.883991 | orchestrator |
2025-06-22 11:35:34.884503 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-22 11:35:34.885111 | orchestrator | Sunday 22 June 2025 11:35:34 +0000 (0:00:01.220) 0:00:08.321 ***********
2025-06-22 11:35:36.042458 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-22 11:35:36.042545 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-22 11:35:36.043119 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-22 11:35:36.091534 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-22 11:35:36.091979 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-22 11:35:36.092960 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-22 11:35:36.093625 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-22 11:35:36.094476 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-22 11:35:36.097689 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-22 11:35:36.097882 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-22 11:35:36.099494 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-22 11:35:36.100205 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-22 11:35:36.101157 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-22 11:35:36.101843 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-22 11:35:36.102778 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-22 11:35:36.103003 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-22 11:35:36.103841 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-22 11:35:36.104665 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-22 11:35:36.104966 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-22 11:35:36.105654 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-22 11:35:36.106301 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-22 11:35:36.106926 | orchestrator |
2025-06-22 11:35:36.107275 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-22 11:35:36.107901 | orchestrator | Sunday 22 June 2025 11:35:36 +0000 (0:00:01.215) 0:00:09.537 ***********
2025-06-22 11:35:36.639742 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:36.640676 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:36.640918 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:36.642582 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:36.642618 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:36.643194 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:36.643673 | orchestrator |
2025-06-22 11:35:36.644591 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-22 11:35:36.644921 | orchestrator | Sunday 22 June 2025 11:35:36 +0000 (0:00:00.547) 0:00:10.085 ***********
2025-06-22 11:35:36.729490 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:35:36.756651 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:35:36.801163 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:35:36.802519 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:35:36.802568 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:35:36.803408 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:35:36.804120 | orchestrator |
2025-06-22 11:35:36.804750 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-22 11:35:36.805502 | orchestrator | Sunday 22 June 2025 11:35:36 +0000 (0:00:00.161) 0:00:10.246 ***********
2025-06-22 11:35:37.506338 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-22 11:35:37.507197 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-22 11:35:37.508984 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:37.510882 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:37.513916 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-22 11:35:37.516309 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:37.516330 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-22 11:35:37.516342 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:37.517318 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-22 11:35:37.518008 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-22 11:35:37.518687 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:37.519332 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:37.519876 | orchestrator |
2025-06-22 11:35:37.520604 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-22 11:35:37.521328 | orchestrator | Sunday 22 June 2025 11:35:37 +0000 (0:00:00.701) 0:00:10.948 ***********
2025-06-22 11:35:37.572274 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:35:37.594582 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:35:37.612934 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:35:37.640836 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:35:37.643168 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:35:37.643868 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:35:37.645088 | orchestrator |
2025-06-22 11:35:37.646202 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-22 11:35:37.646825 | orchestrator | Sunday 22 June 2025 11:35:37 +0000 (0:00:00.138) 0:00:11.087 ***********
2025-06-22 11:35:37.684642 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:35:37.704614 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:35:37.724956 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:35:37.744842 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:35:37.772764 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:35:37.773778 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:35:37.774591 | orchestrator |
2025-06-22 11:35:37.775413 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-22 11:35:37.776205 | orchestrator | Sunday 22 June 2025 11:35:37 +0000 (0:00:00.131) 0:00:11.219 ***********
2025-06-22 11:35:37.842220 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:35:37.869853 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:35:37.893379 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:35:37.925618 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:35:37.927116 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:35:37.927312 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:35:37.928201 | orchestrator |
2025-06-22 11:35:37.929171 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-22 11:35:37.929688 | orchestrator | Sunday 22 June 2025 11:35:37 +0000 (0:00:00.152) 0:00:11.371 ***********
2025-06-22 11:35:38.588114 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:38.588877 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:38.590117 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:38.590963 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:38.591618 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:38.592350 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:38.592916 | orchestrator |
2025-06-22 11:35:38.593651 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-22 11:35:38.594097 | orchestrator | Sunday 22 June 2025 11:35:38 +0000 (0:00:00.662) 0:00:12.033 ***********
2025-06-22 11:35:38.673989 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:35:38.693414 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:35:38.787665 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:35:38.788505 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:35:38.790114 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:35:38.790682 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:35:38.791781 | orchestrator |
2025-06-22 11:35:38.792648 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:35:38.793103 | orchestrator | 2025-06-22 11:35:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:35:38.793320 | orchestrator | 2025-06-22 11:35:38 | INFO  | Please wait and do not abort execution.
2025-06-22 11:35:38.794192 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:35:38.794632 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:35:38.795212 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:35:38.795797 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:35:38.796418 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:35:38.797057 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:35:38.797552 | orchestrator |
2025-06-22 11:35:38.797983 | orchestrator |
2025-06-22 11:35:38.798492 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:35:38.798845 | orchestrator | Sunday 22 June 2025 11:35:38 +0000 (0:00:00.200) 0:00:12.234 ***********
2025-06-22 11:35:38.799324 | orchestrator | ===============================================================================
2025-06-22 11:35:38.799742 | orchestrator | Gathering Facts --------------------------------------------------------- 3.22s
2025-06-22 11:35:38.800085 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s
2025-06-22 11:35:38.800512 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s
2025-06-22 11:35:38.800992 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2025-06-22 11:35:38.801365 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.96s
2025-06-22 11:35:38.801754 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s
2025-06-22 11:35:38.802117 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-06-22 11:35:38.802477 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2025-06-22 11:35:38.802799 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2025-06-22 11:35:38.803264 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s
2025-06-22 11:35:38.803597 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2025-06-22 11:35:38.804037 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-06-22 11:35:38.804720 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-06-22 11:35:38.804900 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2025-06-22 11:35:38.805161 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-06-22 11:35:38.806236 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2025-06-22 11:35:38.806768 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s
2025-06-22 11:35:39.235161 | orchestrator | + osism apply --environment custom facts
2025-06-22 11:35:40.986357 | orchestrator | 2025-06-22 11:35:40 | INFO  | Trying to run play facts in environment custom
2025-06-22 11:35:40.991473 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:35:40.991687 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:35:40.991708 | orchestrator | Registering Redlock._release_script
2025-06-22 11:35:41.053671 | orchestrator | 2025-06-22 11:35:41 | INFO  | Task 690a37d9-c562-4540-8194-22de033eeebe (facts) was prepared for execution.
2025-06-22 11:35:41.053757 | orchestrator | 2025-06-22 11:35:41 | INFO  | It takes a moment until task 690a37d9-c562-4540-8194-22de033eeebe (facts) has been started and output is visible here.
2025-06-22 11:35:44.814912 | orchestrator |
2025-06-22 11:35:44.816716 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-22 11:35:44.818618 | orchestrator |
2025-06-22 11:35:44.821342 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-22 11:35:44.821775 | orchestrator | Sunday 22 June 2025 11:35:44 +0000 (0:00:00.077) 0:00:00.077 ***********
2025-06-22 11:35:46.189836 | orchestrator | ok: [testbed-manager]
2025-06-22 11:35:46.190762 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:46.191984 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:46.192276 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:46.193481 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:46.194213 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:46.195140 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:46.195766 | orchestrator |
2025-06-22 11:35:46.196416 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-22 11:35:46.197217 | orchestrator | Sunday 22 June 2025 11:35:46 +0000 (0:00:01.375) 0:00:01.453 ***********
2025-06-22 11:35:47.337712 | orchestrator | ok: [testbed-manager]
2025-06-22 11:35:47.337867 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:35:47.337954 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:47.338933 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:35:47.338955 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:47.339340 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:35:47.339549 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:47.340032 | orchestrator |
2025-06-22 11:35:47.340510 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-22 11:35:47.340782 | orchestrator |
2025-06-22 11:35:47.341105 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-22 11:35:47.341317 | orchestrator | Sunday 22 June 2025 11:35:47 +0000 (0:00:01.150) 0:00:02.603 ***********
2025-06-22 11:35:47.441804 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:47.442297 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:47.443254 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:47.444227 | orchestrator |
2025-06-22 11:35:47.444698 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-22 11:35:47.445712 | orchestrator | Sunday 22 June 2025 11:35:47 +0000 (0:00:00.103) 0:00:02.707 ***********
2025-06-22 11:35:47.628086 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:47.628534 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:47.629334 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:47.630064 | orchestrator |
2025-06-22 11:35:47.630647 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-22 11:35:47.631189 | orchestrator | Sunday 22 June 2025 11:35:47 +0000 (0:00:00.186) 0:00:02.894 ***********
2025-06-22 11:35:47.828798 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:47.828879 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:47.828891 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:47.828902 | orchestrator |
2025-06-22 11:35:47.829286 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-22 11:35:47.829307 | orchestrator | Sunday 22 June 2025 11:35:47 +0000 (0:00:00.199) 0:00:03.093 ***********
2025-06-22 11:35:47.946377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:35:47.946543 | orchestrator |
2025-06-22 11:35:47.946913 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-22 11:35:47.947427 | orchestrator | Sunday 22 June 2025 11:35:47 +0000 (0:00:00.119) 0:00:03.213 ***********
2025-06-22 11:35:48.349904 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:48.350495 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:48.351814 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:48.352677 | orchestrator |
2025-06-22 11:35:48.353789 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-22 11:35:48.354725 | orchestrator | Sunday 22 June 2025 11:35:48 +0000 (0:00:00.402) 0:00:03.615 ***********
2025-06-22 11:35:48.456721 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:35:48.457570 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:35:48.460483 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:35:48.460508 | orchestrator |
2025-06-22 11:35:48.461427 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-22 11:35:48.461807 | orchestrator | Sunday 22 June 2025 11:35:48 +0000 (0:00:00.107) 0:00:03.722 ***********
2025-06-22 11:35:49.475057 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:49.475247 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:49.476557 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:49.477486 | orchestrator |
2025-06-22 11:35:49.478266 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-22 11:35:49.479135 | orchestrator | Sunday 22 June 2025 11:35:49 +0000 (0:00:01.015) 0:00:04.738 ***********
2025-06-22 11:35:49.939247 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:35:49.939799 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:35:49.940752 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:35:49.941744 | orchestrator |
2025-06-22 11:35:49.942352 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-22 11:35:49.942890 | orchestrator | Sunday 22 June 2025 11:35:49 +0000 (0:00:00.462) 0:00:05.200 ***********
2025-06-22 11:35:50.973120 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:35:50.973921 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:35:50.974429 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:35:50.975619 | orchestrator |
2025-06-22 11:35:50.976037 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-22 11:35:50.976797 | orchestrator | Sunday 22 June 2025 11:35:50 +0000 (0:00:01.036) 0:00:06.237 ***********
2025-06-22 11:36:04.472265 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:36:04.472385 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:36:04.472401 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:36:04.472415 | orchestrator |
2025-06-22 11:36:04.472428 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-22 11:36:04.472440 | orchestrator | Sunday 22 June 2025 11:36:04 +0000 (0:00:13.494) 0:00:19.731 ***********
2025-06-22 11:36:04.598821 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:36:04.598916 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:36:04.602573 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:36:04.602841 | orchestrator |
2025-06-22 11:36:04.603576 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-22 11:36:04.604206 | orchestrator | Sunday 22 June 2025 11:36:04 +0000 (0:00:00.131) 0:00:19.863 ***********
2025-06-22 11:36:12.204208 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:36:12.204725 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:36:12.205292 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:36:12.207459 | orchestrator |
2025-06-22 11:36:12.208480 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-22 11:36:12.209093 | orchestrator | Sunday 22 June 2025 11:36:12 +0000 (0:00:07.604) 0:00:27.468 ***********
2025-06-22 11:36:12.633138 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:36:12.634697 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:36:12.635688 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:36:12.636736 | orchestrator |
2025-06-22 11:36:12.637375 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-22 11:36:12.638235 | orchestrator | Sunday 22 June 2025 11:36:12 +0000 (0:00:00.429) 0:00:27.897 ***********
2025-06-22 11:36:16.199046 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-22 11:36:16.199527 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-22 11:36:16.202347 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-22 11:36:16.203541 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-22 11:36:16.204529 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-22 11:36:16.205789 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-22 11:36:16.206883 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-22 11:36:16.207913 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-22 11:36:16.208808 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-22 11:36:16.209736 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-22 11:36:16.210921 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-22 11:36:16.211460 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-22 11:36:16.212268 | orchestrator |
2025-06-22 11:36:16.213526 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-22 11:36:16.214241 | orchestrator | Sunday 22 June 2025 11:36:16 +0000 (0:00:03.564) 0:00:31.462 ***********
2025-06-22 11:36:17.434719 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:36:17.434832 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:36:17.436383 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:36:17.438314 | orchestrator |
2025-06-22 11:36:17.439445 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-22 11:36:17.440822 | orchestrator |
2025-06-22 11:36:17.441378 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-22 11:36:17.443088 | orchestrator | Sunday 22 June 2025 11:36:17 +0000 (0:00:03.874) 0:00:32.698 ***********
2025-06-22 11:36:21.309080 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:36:21.309621 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:36:21.312481 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:36:21.313612 | orchestrator | ok: [testbed-manager]
2025-06-22 11:36:21.314954 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:36:21.316074 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:36:21.316633 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:36:21.317947 | orchestrator |
2025-06-22 11:36:21.318498 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:36:21.319144 | orchestrator | 2025-06-22 11:36:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:36:21.319166 | orchestrator | 2025-06-22 11:36:21 | INFO  | Please wait and do not abort execution.
2025-06-22 11:36:21.320112 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:36:21.320564 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:36:21.321339 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:36:21.322229 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:36:21.323226 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:36:21.323851 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:36:21.324468 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:36:21.324802 | orchestrator |
2025-06-22 11:36:21.325558 | orchestrator |
2025-06-22 11:36:21.326118 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:36:21.326650 | orchestrator | Sunday 22 June 2025 11:36:21 +0000 (0:00:03.874) 0:00:36.573 ***********
2025-06-22 11:36:21.327457 | orchestrator | ===============================================================================
2025-06-22 11:36:21.328140 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.49s
2025-06-22 11:36:21.328580 | orchestrator | Install required packages (Debian) -------------------------------------- 7.60s
2025-06-22 11:36:21.329104 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s
2025-06-22 11:36:21.329524 | orchestrator | Copy fact files --------------------------------------------------------- 3.56s
2025-06-22 11:36:21.330132 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2025-06-22 11:36:21.330529 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.24s
2025-06-22 11:36:21.331169 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s
2025-06-22 11:36:21.331559 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2025-06-22 11:36:21.331945 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2025-06-22 11:36:21.332429 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-06-22 11:36:21.332755 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2025-06-22 11:36:21.333210 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2025-06-22 11:36:21.333590 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-06-22 11:36:21.333947 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-06-22 11:36:21.334412 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-06-22 11:36:21.334848 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-06-22 11:36:21.335273 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-06-22 11:36:21.335631 |
orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2025-06-22 11:36:21.799700 | orchestrator | + osism apply bootstrap 2025-06-22 11:36:23.445472 | orchestrator | Registering Redlock._acquired_script 2025-06-22 11:36:23.445581 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:36:23.445597 | orchestrator | Registering Redlock._release_script 2025-06-22 11:36:23.513316 | orchestrator | 2025-06-22 11:36:23 | INFO  | Task 8ba80b95-b0db-469f-830b-5403e9854f0a (bootstrap) was prepared for execution. 2025-06-22 11:36:23.513412 | orchestrator | 2025-06-22 11:36:23 | INFO  | It takes a moment until task 8ba80b95-b0db-469f-830b-5403e9854f0a (bootstrap) has been started and output is visible here. 2025-06-22 11:36:27.476357 | orchestrator | 2025-06-22 11:36:27.477102 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-22 11:36:27.479315 | orchestrator | 2025-06-22 11:36:27.480430 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-22 11:36:27.480681 | orchestrator | Sunday 22 June 2025 11:36:27 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-06-22 11:36:27.553597 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:27.571283 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:27.593355 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:27.635812 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:27.726163 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:27.726335 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:27.726829 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:27.727586 | orchestrator | 2025-06-22 11:36:27.728407 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 11:36:27.730667 | orchestrator | 2025-06-22 11:36:27.731166 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-22 11:36:27.732185 | orchestrator | Sunday 22 June 2025 11:36:27 +0000 (0:00:00.254) 0:00:00.411 *********** 2025-06-22 11:36:31.451234 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:31.451788 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:31.452354 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:31.453157 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:31.454223 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:31.455235 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:31.455846 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:31.456945 | orchestrator | 2025-06-22 11:36:31.458387 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-22 11:36:31.459067 | orchestrator | 2025-06-22 11:36:31.460110 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 11:36:31.460921 | orchestrator | Sunday 22 June 2025 11:36:31 +0000 (0:00:03.722) 0:00:04.134 *********** 2025-06-22 11:36:31.547430 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 11:36:31.547530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 11:36:31.589170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-22 11:36:31.590161 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 11:36:31.591073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 11:36:31.594675 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-22 11:36:31.594711 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 11:36:31.594723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 11:36:31.652250 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 11:36:31.652883 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-22 11:36:31.653312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 11:36:31.653759 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-22 11:36:31.654229 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 11:36:31.654582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 11:36:31.655069 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 11:36:31.655326 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-22 11:36:31.655811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 11:36:31.656132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-22 11:36:31.885358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-22 11:36:31.885956 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:31.886286 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-22 11:36:31.886820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 11:36:31.887295 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-22 11:36:31.888012 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:36:31.888716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 11:36:31.888915 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-22 11:36:31.889819 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-22 11:36:31.890282 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 11:36:31.890775 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-22 11:36:31.891480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-22 11:36:31.892228 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-22 11:36:31.892705 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 11:36:31.893183 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-22 11:36:31.893463 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:36:31.894198 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-22 11:36:31.894627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-22 11:36:31.894946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-22 11:36:31.895710 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-22 11:36:31.896036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 11:36:31.896903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 11:36:31.897012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 11:36:31.897096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-22 11:36:31.897534 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 11:36:31.898147 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 11:36:31.898789 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 11:36:31.898837 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-22 11:36:31.898930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 11:36:31.899250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 11:36:31.899617 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:36:31.901348 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:36:31.901379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 11:36:31.901390 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
11:36:31.901402 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 11:36:31.901436 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 11:36:31.901519 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 11:36:31.902305 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:36:31.902655 | orchestrator | 2025-06-22 11:36:31.902959 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-22 11:36:31.903281 | orchestrator | 2025-06-22 11:36:31.903592 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-22 11:36:31.903902 | orchestrator | Sunday 22 June 2025 11:36:31 +0000 (0:00:00.436) 0:00:04.570 *********** 2025-06-22 11:36:33.203337 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:33.204122 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:33.204163 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:33.204250 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:33.204941 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:33.205059 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:33.205358 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:33.205671 | orchestrator | 2025-06-22 11:36:33.206013 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-22 11:36:33.206410 | orchestrator | Sunday 22 June 2025 11:36:33 +0000 (0:00:01.316) 0:00:05.887 *********** 2025-06-22 11:36:34.388727 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:34.388910 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:34.389782 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:34.393595 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:34.393640 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:34.393656 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:34.393667 | orchestrator | ok: 
[testbed-node-2] 2025-06-22 11:36:34.393678 | orchestrator | 2025-06-22 11:36:34.393691 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-22 11:36:34.394131 | orchestrator | Sunday 22 June 2025 11:36:34 +0000 (0:00:01.184) 0:00:07.072 *********** 2025-06-22 11:36:34.670709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:36:34.671430 | orchestrator | 2025-06-22 11:36:34.671841 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-22 11:36:34.672505 | orchestrator | Sunday 22 June 2025 11:36:34 +0000 (0:00:00.282) 0:00:07.354 *********** 2025-06-22 11:36:36.625611 | orchestrator | changed: [testbed-manager] 2025-06-22 11:36:36.628822 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:36:36.629414 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:36.630105 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:36:36.630566 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:36.631300 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:36:36.631941 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:36:36.632911 | orchestrator | 2025-06-22 11:36:36.633551 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-22 11:36:36.633986 | orchestrator | Sunday 22 June 2025 11:36:36 +0000 (0:00:01.954) 0:00:09.308 *********** 2025-06-22 11:36:36.706916 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:36.900290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:36:36.900470 | 
orchestrator | 2025-06-22 11:36:36.900895 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-22 11:36:36.901482 | orchestrator | Sunday 22 June 2025 11:36:36 +0000 (0:00:00.276) 0:00:09.585 *********** 2025-06-22 11:36:37.890898 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:36:37.891420 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:36:37.891991 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:36:37.893252 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:36:37.894116 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:37.894664 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:37.895001 | orchestrator | 2025-06-22 11:36:37.895720 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-22 11:36:37.895944 | orchestrator | Sunday 22 June 2025 11:36:37 +0000 (0:00:00.986) 0:00:10.572 *********** 2025-06-22 11:36:37.973031 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:38.450912 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:36:38.451056 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:36:38.451635 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:38.452504 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:38.453137 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:36:38.454558 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:36:38.454579 | orchestrator | 2025-06-22 11:36:38.455660 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-22 11:36:38.455702 | orchestrator | Sunday 22 June 2025 11:36:38 +0000 (0:00:00.561) 0:00:11.134 *********** 2025-06-22 11:36:38.603550 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:36:38.632314 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:36:38.661441 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:36:38.870140 | orchestrator | 
skipping: [testbed-node-0] 2025-06-22 11:36:38.870337 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:36:38.871516 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:36:38.872898 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:38.873673 | orchestrator | 2025-06-22 11:36:38.874891 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 11:36:38.875831 | orchestrator | Sunday 22 June 2025 11:36:38 +0000 (0:00:00.418) 0:00:11.552 *********** 2025-06-22 11:36:38.950338 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:38.976500 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:36:39.004476 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:36:39.036738 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:36:39.107441 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:36:39.109507 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:36:39.113433 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:36:39.113488 | orchestrator | 2025-06-22 11:36:39.113501 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 11:36:39.113514 | orchestrator | Sunday 22 June 2025 11:36:39 +0000 (0:00:00.239) 0:00:11.792 *********** 2025-06-22 11:36:39.403084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:36:39.404150 | orchestrator | 2025-06-22 11:36:39.405654 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 11:36:39.406235 | orchestrator | Sunday 22 June 2025 11:36:39 +0000 (0:00:00.295) 0:00:12.087 *********** 2025-06-22 11:36:39.713623 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:36:39.719847 | orchestrator | 2025-06-22 11:36:39.719909 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 11:36:39.719923 | orchestrator | Sunday 22 June 2025 11:36:39 +0000 (0:00:00.310) 0:00:12.398 *********** 2025-06-22 11:36:40.995104 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:40.996123 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:40.997020 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:40.998464 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:40.999415 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:41.000389 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:41.001267 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:41.001784 | orchestrator | 2025-06-22 11:36:41.002783 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 11:36:41.003172 | orchestrator | Sunday 22 June 2025 11:36:40 +0000 (0:00:01.279) 0:00:13.677 *********** 2025-06-22 11:36:41.075397 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:41.102713 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:36:41.138103 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:36:41.167413 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:36:41.221766 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:36:41.221911 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:36:41.224039 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:36:41.225538 | orchestrator | 2025-06-22 11:36:41.225883 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 11:36:41.227061 | orchestrator | Sunday 22 June 2025 11:36:41 
+0000 (0:00:00.228) 0:00:13.905 *********** 2025-06-22 11:36:41.756848 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:41.757091 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:41.757621 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:41.760796 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:41.760875 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:41.763516 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:41.764469 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:41.765215 | orchestrator | 2025-06-22 11:36:41.766167 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 11:36:41.767053 | orchestrator | Sunday 22 June 2025 11:36:41 +0000 (0:00:00.533) 0:00:14.439 *********** 2025-06-22 11:36:41.867257 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:41.898486 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:36:41.916649 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:36:41.989530 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:36:41.990615 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:36:41.993293 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:36:41.993328 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:36:41.993341 | orchestrator | 2025-06-22 11:36:41.993858 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 11:36:41.995441 | orchestrator | Sunday 22 June 2025 11:36:41 +0000 (0:00:00.234) 0:00:14.674 *********** 2025-06-22 11:36:42.506406 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:36:42.507590 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:42.508205 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:36:42.509255 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:36:42.509748 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:42.510501 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 11:36:42.511296 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:42.511760 | orchestrator | 2025-06-22 11:36:42.512444 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 11:36:42.513166 | orchestrator | Sunday 22 June 2025 11:36:42 +0000 (0:00:00.514) 0:00:15.189 *********** 2025-06-22 11:36:43.633180 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:43.635295 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:36:43.636853 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:36:43.637872 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:36:43.638799 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:36:43.639703 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:43.640286 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:43.641157 | orchestrator | 2025-06-22 11:36:43.641809 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 11:36:43.642770 | orchestrator | Sunday 22 June 2025 11:36:43 +0000 (0:00:01.125) 0:00:16.315 *********** 2025-06-22 11:36:44.782294 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:44.782724 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:44.783798 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:44.784801 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:44.785770 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:44.787589 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:44.788343 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:44.789371 | orchestrator | 2025-06-22 11:36:44.790094 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 11:36:44.790832 | orchestrator | Sunday 22 June 2025 11:36:44 +0000 (0:00:01.150) 0:00:17.465 *********** 2025-06-22 11:36:45.185819 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:36:45.185922 | orchestrator | 2025-06-22 11:36:45.185938 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 11:36:45.186561 | orchestrator | Sunday 22 June 2025 11:36:45 +0000 (0:00:00.402) 0:00:17.868 *********** 2025-06-22 11:36:45.271578 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:46.470306 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:36:46.472673 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:46.474652 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:46.475923 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:36:46.476996 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:36:46.478834 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:36:46.479788 | orchestrator | 2025-06-22 11:36:46.480722 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 11:36:46.481688 | orchestrator | Sunday 22 June 2025 11:36:46 +0000 (0:00:01.284) 0:00:19.152 *********** 2025-06-22 11:36:46.553539 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:46.585624 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:46.603624 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:46.636199 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:46.693157 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:46.693817 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:46.694613 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:46.696512 | orchestrator | 2025-06-22 11:36:46.697407 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 11:36:46.697889 | orchestrator | Sunday 22 June 2025 11:36:46 
+0000 (0:00:00.224) 0:00:19.377 *********** 2025-06-22 11:36:46.779459 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:46.802256 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:46.828589 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:46.855509 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:46.917398 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:46.917468 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:46.917656 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:46.919556 | orchestrator | 2025-06-22 11:36:46.920624 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 11:36:46.921701 | orchestrator | Sunday 22 June 2025 11:36:46 +0000 (0:00:00.224) 0:00:19.601 *********** 2025-06-22 11:36:46.992099 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:47.017445 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:47.042575 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:47.067619 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:47.119752 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:47.120111 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:47.121371 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:47.122428 | orchestrator | 2025-06-22 11:36:47.122872 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 11:36:47.124047 | orchestrator | Sunday 22 June 2025 11:36:47 +0000 (0:00:00.203) 0:00:19.804 *********** 2025-06-22 11:36:47.427055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:36:47.427915 | orchestrator | 2025-06-22 11:36:47.429420 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 11:36:47.430134 | 
orchestrator | Sunday 22 June 2025 11:36:47 +0000 (0:00:00.306) 0:00:20.111 *********** 2025-06-22 11:36:48.038400 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:48.038623 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:48.040849 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:48.041447 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:48.042901 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:48.043458 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:48.044711 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:48.045383 | orchestrator | 2025-06-22 11:36:48.046119 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 11:36:48.047049 | orchestrator | Sunday 22 June 2025 11:36:48 +0000 (0:00:00.609) 0:00:20.720 *********** 2025-06-22 11:36:48.128699 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:36:48.157257 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:36:48.183272 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:36:48.213902 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:36:48.280532 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:36:48.280684 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:36:48.281014 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:36:48.281410 | orchestrator | 2025-06-22 11:36:48.282929 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 11:36:48.283605 | orchestrator | Sunday 22 June 2025 11:36:48 +0000 (0:00:00.245) 0:00:20.966 *********** 2025-06-22 11:36:49.377138 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:49.377257 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:49.377339 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:49.378295 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:49.380046 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:49.381040 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 11:36:49.381665 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:49.382755 | orchestrator | 2025-06-22 11:36:49.383886 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 11:36:49.384766 | orchestrator | Sunday 22 June 2025 11:36:49 +0000 (0:00:01.092) 0:00:22.058 *********** 2025-06-22 11:36:49.965145 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:49.965743 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:49.967068 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:49.968152 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:49.969272 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:36:49.970331 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:36:49.970880 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:36:49.971811 | orchestrator | 2025-06-22 11:36:49.972509 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 11:36:49.973657 | orchestrator | Sunday 22 June 2025 11:36:49 +0000 (0:00:00.589) 0:00:22.648 *********** 2025-06-22 11:36:51.107166 | orchestrator | ok: [testbed-manager] 2025-06-22 11:36:51.107980 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:36:51.109014 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:36:51.109338 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:36:51.110284 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:36:51.111112 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:36:51.111473 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:36:51.112325 | orchestrator | 2025-06-22 11:36:51.113025 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 11:36:51.113575 | orchestrator | Sunday 22 June 2025 11:36:51 +0000 (0:00:01.141) 0:00:23.790 *********** 2025-06-22 11:37:04.846480 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:04.846600 | orchestrator | ok: 
[testbed-node-5] 2025-06-22 11:37:04.846615 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:04.846848 | orchestrator | changed: [testbed-manager] 2025-06-22 11:37:04.847933 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:37:04.851037 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:37:04.852025 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:37:04.852884 | orchestrator | 2025-06-22 11:37:04.853927 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-22 11:37:04.854553 | orchestrator | Sunday 22 June 2025 11:37:04 +0000 (0:00:13.734) 0:00:37.524 *********** 2025-06-22 11:37:04.924787 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:04.955785 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:04.984503 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:05.014846 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:05.086321 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:05.087200 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:05.088206 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:05.088374 | orchestrator | 2025-06-22 11:37:05.089004 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-22 11:37:05.089755 | orchestrator | Sunday 22 June 2025 11:37:05 +0000 (0:00:00.247) 0:00:37.771 *********** 2025-06-22 11:37:05.167133 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:05.200477 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:05.223930 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:05.253682 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:05.320689 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:05.322211 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:05.322545 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:05.324365 | orchestrator | 2025-06-22 11:37:05.325064 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-06-22 11:37:05.327005 | orchestrator | Sunday 22 June 2025 11:37:05 +0000 (0:00:00.233) 0:00:38.005 *********** 2025-06-22 11:37:05.401615 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:05.430728 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:05.457444 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:05.486926 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:05.569170 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:05.569325 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:05.569941 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:05.570501 | orchestrator | 2025-06-22 11:37:05.572681 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-22 11:37:05.572719 | orchestrator | Sunday 22 June 2025 11:37:05 +0000 (0:00:00.248) 0:00:38.253 *********** 2025-06-22 11:37:05.858906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:37:05.859404 | orchestrator | 2025-06-22 11:37:05.865223 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-22 11:37:05.865254 | orchestrator | Sunday 22 June 2025 11:37:05 +0000 (0:00:00.288) 0:00:38.542 *********** 2025-06-22 11:37:07.459624 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:07.460426 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:07.461499 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:07.462137 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:07.463414 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:07.464692 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:07.465561 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:07.466115 | orchestrator | 2025-06-22 11:37:07.467345 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-22 11:37:07.468173 | orchestrator | Sunday 22 June 2025 11:37:07 +0000 (0:00:01.600) 0:00:40.142 *********** 2025-06-22 11:37:08.482857 | orchestrator | changed: [testbed-manager] 2025-06-22 11:37:08.483068 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:37:08.484006 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:37:08.484256 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:37:08.486421 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:37:08.486465 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:37:08.486477 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:37:08.486867 | orchestrator | 2025-06-22 11:37:08.487347 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-22 11:37:08.487755 | orchestrator | Sunday 22 June 2025 11:37:08 +0000 (0:00:01.024) 0:00:41.166 *********** 2025-06-22 11:37:09.275231 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:09.275439 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:09.275455 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:09.276003 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:09.276348 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:09.278578 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:09.278616 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:09.278813 | orchestrator | 2025-06-22 11:37:09.279044 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-22 11:37:09.279892 | orchestrator | Sunday 22 June 2025 11:37:09 +0000 (0:00:00.791) 0:00:41.958 *********** 2025-06-22 11:37:09.580198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 
11:37:09.581190 | orchestrator | 2025-06-22 11:37:09.587404 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-22 11:37:09.587432 | orchestrator | Sunday 22 June 2025 11:37:09 +0000 (0:00:00.305) 0:00:42.264 *********** 2025-06-22 11:37:10.619706 | orchestrator | changed: [testbed-manager] 2025-06-22 11:37:10.620665 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:37:10.625499 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:37:10.625559 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:37:10.625579 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:37:10.627565 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:37:10.627866 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:37:10.628508 | orchestrator | 2025-06-22 11:37:10.628788 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-22 11:37:10.629348 | orchestrator | Sunday 22 June 2025 11:37:10 +0000 (0:00:01.038) 0:00:43.302 *********** 2025-06-22 11:37:10.734724 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:37:10.768711 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:37:10.796512 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:37:10.936665 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:37:10.937221 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:37:10.941326 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:37:10.941350 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:37:10.941362 | orchestrator | 2025-06-22 11:37:10.942404 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-22 11:37:10.943678 | orchestrator | Sunday 22 June 2025 11:37:10 +0000 (0:00:00.317) 0:00:43.620 *********** 2025-06-22 11:37:22.549107 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:37:22.549225 | orchestrator | changed: [testbed-node-3] 2025-06-22 
11:37:22.549241 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:37:22.549253 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:37:22.549264 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:37:22.549334 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:37:22.549700 | orchestrator | changed: [testbed-manager] 2025-06-22 11:37:22.551371 | orchestrator | 2025-06-22 11:37:22.551394 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-22 11:37:22.551407 | orchestrator | Sunday 22 June 2025 11:37:22 +0000 (0:00:11.609) 0:00:55.229 *********** 2025-06-22 11:37:23.654289 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:23.654424 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:23.654994 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:23.655800 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:23.656698 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:23.658509 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:23.659071 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:23.660019 | orchestrator | 2025-06-22 11:37:23.661155 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-22 11:37:23.661755 | orchestrator | Sunday 22 June 2025 11:37:23 +0000 (0:00:01.107) 0:00:56.337 *********** 2025-06-22 11:37:24.513873 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:24.514168 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:24.514882 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:24.515856 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:24.516229 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:24.516804 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:24.517479 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:24.518853 | orchestrator | 2025-06-22 11:37:24.519025 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
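The interleaved `Sunday 22 June 2025 … (0:00:00.609) 0:00:20.720` lines above come from Ansible's task-profiling callback: the parenthesized value is the elapsed time of the previous task and the second value is the cumulative playbook runtime. A minimal sketch of extracting both durations from such a line (the regex is an assumption matched against the format shown in this log, not part of the job itself):

```python
import re

# Matches timing fragments like "(0:00:38.390)       0:01:44.004"
# as printed by the profiling callback in the log above.
TIMING_RE = re.compile(r"\((\d+):(\d{2}):(\d{2}\.\d+)\)\s+(\d+):(\d{2}):(\d{2}\.\d+)")

def parse_timing(line: str):
    """Return (task_seconds, cumulative_seconds), or None if the line has no timing."""
    m = TIMING_RE.search(line)
    if m is None:
        return None
    h1, m1, s1, h2, m2, s2 = m.groups()
    task = int(h1) * 3600 + int(m1) * 60 + float(s1)
    total = int(h2) * 3600 + int(m2) * 60 + float(s2)
    return task, total

line = "Sunday 22 June 2025 11:38:11 +0000 (0:00:38.390)       0:01:44.004 ***********"
print(parse_timing(line))  # per-task seconds (~38.39) and cumulative seconds (~104.0)
```

Feeding every console line through `parse_timing` and keeping the non-`None` results is enough to rank tasks by duration, e.g. to spot that "Install required packages" dominates this run.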
2025-06-22 11:37:24.519643 | orchestrator | Sunday 22 June 2025 11:37:24 +0000 (0:00:00.860) 0:00:57.198 *********** 2025-06-22 11:37:24.573488 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:24.628761 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:24.662491 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:24.688367 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:24.767849 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:24.769196 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:24.773324 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:24.777214 | orchestrator | 2025-06-22 11:37:24.778376 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-22 11:37:24.779198 | orchestrator | Sunday 22 June 2025 11:37:24 +0000 (0:00:00.252) 0:00:57.450 *********** 2025-06-22 11:37:24.845926 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:24.874334 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:24.902393 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:24.927937 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:24.988919 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:24.989167 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:24.989460 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:24.990085 | orchestrator | 2025-06-22 11:37:24.990282 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-22 11:37:24.991043 | orchestrator | Sunday 22 June 2025 11:37:24 +0000 (0:00:00.223) 0:00:57.674 *********** 2025-06-22 11:37:25.329831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:37:25.330106 | orchestrator | 2025-06-22 11:37:25.330850 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-06-22 11:37:25.334232 | orchestrator | Sunday 22 June 2025 11:37:25 +0000 (0:00:00.339) 0:00:58.013 *********** 2025-06-22 11:37:26.967855 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:26.970204 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:26.970234 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:26.970288 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:26.970963 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:26.971584 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:26.973398 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:26.973843 | orchestrator | 2025-06-22 11:37:26.974432 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-22 11:37:26.975051 | orchestrator | Sunday 22 June 2025 11:37:26 +0000 (0:00:01.635) 0:00:59.649 *********** 2025-06-22 11:37:27.524317 | orchestrator | changed: [testbed-manager] 2025-06-22 11:37:27.524463 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:37:27.525156 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:37:27.525667 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:37:27.526237 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:37:27.526660 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:37:27.527109 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:37:27.528838 | orchestrator | 2025-06-22 11:37:27.529371 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-22 11:37:27.529878 | orchestrator | Sunday 22 June 2025 11:37:27 +0000 (0:00:00.559) 0:01:00.209 *********** 2025-06-22 11:37:27.609414 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:27.640364 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:27.667858 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:27.699206 | orchestrator | ok: [testbed-node-5] 2025-06-22 
11:37:27.757501 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:27.758320 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:27.759838 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:27.761128 | orchestrator | 2025-06-22 11:37:27.762120 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-22 11:37:27.763224 | orchestrator | Sunday 22 June 2025 11:37:27 +0000 (0:00:00.232) 0:01:00.442 *********** 2025-06-22 11:37:28.867371 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:28.869374 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:28.869482 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:28.870719 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:28.871264 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:28.872403 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:28.873018 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:28.874074 | orchestrator | 2025-06-22 11:37:28.874685 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-22 11:37:28.875376 | orchestrator | Sunday 22 June 2025 11:37:28 +0000 (0:00:01.108) 0:01:01.550 *********** 2025-06-22 11:37:30.577269 | orchestrator | changed: [testbed-manager] 2025-06-22 11:37:30.581112 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:37:30.581150 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:37:30.582508 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:37:30.583116 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:37:30.583638 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:37:30.584206 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:37:30.584646 | orchestrator | 2025-06-22 11:37:30.585356 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-22 11:37:30.585901 | orchestrator | Sunday 22 June 2025 11:37:30 +0000 (0:00:01.709) 0:01:03.260 *********** 2025-06-22 
11:37:32.930125 | orchestrator | ok: [testbed-manager] 2025-06-22 11:37:32.931005 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:37:32.932451 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:37:32.935223 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:37:32.936215 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:37:32.937215 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:37:32.938341 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:37:32.939368 | orchestrator | 2025-06-22 11:37:32.940156 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-22 11:37:32.941093 | orchestrator | Sunday 22 June 2025 11:37:32 +0000 (0:00:02.352) 0:01:05.613 *********** 2025-06-22 11:38:11.324257 | orchestrator | ok: [testbed-manager] 2025-06-22 11:38:11.324361 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:38:11.324372 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:38:11.324786 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:38:11.325678 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:38:11.327859 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:38:11.328734 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:38:11.328793 | orchestrator | 2025-06-22 11:38:11.329574 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-22 11:38:11.330147 | orchestrator | Sunday 22 June 2025 11:38:11 +0000 (0:00:38.390) 0:01:44.004 *********** 2025-06-22 11:39:28.855145 | orchestrator | changed: [testbed-manager] 2025-06-22 11:39:28.855268 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:39:28.855284 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:39:28.855295 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:39:28.855306 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:39:28.855381 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:39:28.855965 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:39:28.856435 | 
orchestrator | 2025-06-22 11:39:28.857884 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-22 11:39:28.858959 | orchestrator | Sunday 22 June 2025 11:39:28 +0000 (0:01:17.527) 0:03:01.531 *********** 2025-06-22 11:39:30.413731 | orchestrator | ok: [testbed-manager] 2025-06-22 11:39:30.413966 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:39:30.414976 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:39:30.415757 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:39:30.416881 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:39:30.417466 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:39:30.418190 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:39:30.418704 | orchestrator | 2025-06-22 11:39:30.419261 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-22 11:39:30.419935 | orchestrator | Sunday 22 June 2025 11:39:30 +0000 (0:00:01.563) 0:03:03.095 *********** 2025-06-22 11:39:41.975559 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:39:41.975687 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:39:41.975703 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:39:41.975714 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:39:41.975725 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:39:41.976138 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:39:41.977343 | orchestrator | changed: [testbed-manager] 2025-06-22 11:39:41.978886 | orchestrator | 2025-06-22 11:39:41.980131 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-22 11:39:41.980839 | orchestrator | Sunday 22 June 2025 11:39:41 +0000 (0:00:11.559) 0:03:14.654 *********** 2025-06-22 11:39:42.384272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-22 11:39:42.384682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-22 11:39:42.385447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-22 11:39:42.390217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-22 11:39:42.390705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-06-22 11:39:42.391445 | orchestrator | 2025-06-22 11:39:42.392277 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-22 11:39:42.393343 | orchestrator | Sunday 22 June 2025 11:39:42 +0000 (0:00:00.414) 0:03:15.068 *********** 2025-06-22 11:39:42.443598 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 11:39:42.473199 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:39:42.473711 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 11:39:42.473740 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 11:39:42.502429 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:39:42.534497 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 11:39:42.534603 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:39:42.558419 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:39:43.057681 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 11:39:43.057861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 11:39:43.057882 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 11:39:43.058214 | orchestrator | 2025-06-22 11:39:43.059748 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-22 11:39:43.063376 | orchestrator | Sunday 22 June 2025 11:39:43 +0000 (0:00:00.673) 0:03:15.741 *********** 2025-06-22 11:39:43.125944 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 11:39:43.126052 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 11:39:43.126480 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 11:39:43.128390 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 11:39:43.128404 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 11:39:43.176627 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 11:39:43.179465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 11:39:43.179491 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 11:39:43.179503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 11:39:43.179514 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 11:39:43.179525 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 11:39:43.179536 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 11:39:43.179547 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 11:39:43.179558 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 11:39:43.179568 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 11:39:43.221395 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 11:39:43.225408 | orchestrator | skipping: [testbed-manager] 2025-06-22 
11:39:43.225440 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 11:39:43.225453 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 11:39:43.225464 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 11:39:43.225475 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 11:39:43.225486 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 11:39:43.225739 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 11:39:43.226465 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 11:39:43.273813 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 11:39:43.274141 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:39:43.274823 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 11:39:43.275635 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 11:39:43.276419 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 11:39:43.276591 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 11:39:43.277188 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 11:39:43.277631 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 11:39:43.278571 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 11:39:43.278594 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 11:39:43.278884 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 11:39:43.280042 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 11:39:43.280090 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 11:39:43.280102 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 11:39:43.280981 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 11:39:43.281386 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 11:39:43.281787 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 11:39:43.283310 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 11:39:43.299709 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:39:49.765848 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:39:49.768072 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 11:39:49.769481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 11:39:49.770668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 11:39:49.772168 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 11:39:49.772700 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 11:39:49.773660 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 11:39:49.774852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 11:39:49.775976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 11:39:49.776247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 11:39:49.777544 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 11:39:49.778628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 11:39:49.779976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 11:39:49.780074 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 11:39:49.781377 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 11:39:49.782541 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 11:39:49.783318 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 11:39:49.784404 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 11:39:49.785267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 11:39:49.785696 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 11:39:49.786393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 
11:39:49.786858 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 11:39:49.787293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 11:39:49.787864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 11:39:49.788636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 11:39:49.789111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 11:39:49.789556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 11:39:49.790336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 11:39:49.790672 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 11:39:49.794594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 11:39:49.794705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 11:39:49.795216 | orchestrator | 2025-06-22 11:39:49.795454 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-22 11:39:49.795967 | orchestrator | Sunday 22 June 2025 11:39:49 +0000 (0:00:06.706) 0:03:22.448 *********** 2025-06-22 11:39:51.215841 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.217762 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.222628 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.222666 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.224108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.228208 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.228229 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 11:39:51.230725 | orchestrator | 2025-06-22 11:39:51.230773 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-22 11:39:51.231108 | orchestrator | Sunday 22 June 2025 11:39:51 +0000 (0:00:01.450) 0:03:23.899 *********** 2025-06-22 11:39:51.273215 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 11:39:51.303305 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:39:51.385751 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 11:39:53.671872 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:39:53.674224 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 11:39:53.674256 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:39:53.675426 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 11:39:53.676001 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:39:53.677159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 11:39:53.677876 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 11:39:53.678404 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 
11:39:53.679633 | orchestrator | 2025-06-22 11:39:53.679932 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-22 11:39:53.680853 | orchestrator | Sunday 22 June 2025 11:39:53 +0000 (0:00:02.455) 0:03:26.354 *********** 2025-06-22 11:39:53.731091 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 11:39:53.759263 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:39:53.860694 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 11:39:53.860845 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 11:39:55.232954 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:39:55.233474 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:39:55.233502 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 11:39:55.233918 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:39:55.235122 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 11:39:55.237418 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 11:39:55.237586 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 11:39:55.238768 | orchestrator | 2025-06-22 11:39:55.239711 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-22 11:39:55.240355 | orchestrator | Sunday 22 June 2025 11:39:55 +0000 (0:00:01.562) 0:03:27.916 *********** 2025-06-22 11:39:55.285837 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:39:55.345267 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:39:55.369491 | orchestrator 
| skipping: [testbed-node-4] 2025-06-22 11:39:55.390175 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:39:55.514514 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:39:55.517567 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:39:55.517628 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:39:55.519276 | orchestrator | 2025-06-22 11:39:55.520080 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-22 11:39:55.521869 | orchestrator | Sunday 22 June 2025 11:39:55 +0000 (0:00:00.280) 0:03:28.197 *********** 2025-06-22 11:40:01.292992 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:01.293106 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:01.293487 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:01.294702 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:01.297849 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:01.298749 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:01.299491 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:40:01.300450 | orchestrator | 2025-06-22 11:40:01.301400 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-22 11:40:01.302426 | orchestrator | Sunday 22 June 2025 11:40:01 +0000 (0:00:05.780) 0:03:33.977 *********** 2025-06-22 11:40:01.372827 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-22 11:40:01.372951 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-22 11:40:01.404975 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:40:01.445439 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:40:01.446088 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-22 11:40:01.483307 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:40:01.488617 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-22 11:40:01.489234 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  
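The `osism.commons.sysctl` tasks above apply kernel tunables as `{'name': ..., 'value': ...}` items per host group (generic, compute, k3s_node). A minimal sketch of what such items correspond to on a Linux host — the helper names below are hypothetical and not part of the role, which uses Ansible's sysctl handling rather than this code:

```python
# Hypothetical helpers (illustrative only, not part of osism.commons.sysctl):
# show how dotted sysctl names map to files under /proc/sys, and how
# name/value items render in /etc/sysctl.d drop-in syntax.
def sysctl_path(name: str) -> str:
    """Translate a dotted sysctl name into its /proc/sys file path."""
    return "/proc/sys/" + name.replace(".", "/")

def render_sysctl_conf(params: list[dict]) -> str:
    """Render name/value items as lines in sysctl.conf syntax."""
    return "\n".join(f"{p['name']} = {p['value']}" for p in params)

params = [
    {"name": "net.ipv4.tcp_keepalive_intvl", "value": 3},
    {"name": "net.core.somaxconn", "value": 4096},
]
print(sysctl_path(params[0]["name"]))  # /proc/sys/net/ipv4/tcp_keepalive_intvl
print(render_sysctl_conf(params))
```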
2025-06-22 11:40:01.521574 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:40:01.524650 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-22 11:40:01.580642 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:40:01.582122 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:40:01.582717 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-22 11:40:01.585463 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:40:01.585665 | orchestrator | 2025-06-22 11:40:01.586119 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-22 11:40:01.586594 | orchestrator | Sunday 22 June 2025 11:40:01 +0000 (0:00:00.289) 0:03:34.266 *********** 2025-06-22 11:40:02.746860 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-22 11:40:02.747497 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-22 11:40:02.748461 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-22 11:40:02.749868 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-22 11:40:02.751499 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-22 11:40:02.752215 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-22 11:40:02.753096 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-22 11:40:02.753647 | orchestrator | 2025-06-22 11:40:02.755420 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-22 11:40:02.756224 | orchestrator | Sunday 22 June 2025 11:40:02 +0000 (0:00:01.161) 0:03:35.428 *********** 2025-06-22 11:40:03.284758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:40:03.285664 | orchestrator | 2025-06-22 11:40:03.286399 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-06-22 11:40:03.287343 | orchestrator | Sunday 22 June 2025 11:40:03 +0000 (0:00:00.540) 0:03:35.969 *********** 2025-06-22 11:40:04.506831 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:04.507375 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:04.509494 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:04.511060 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:04.511852 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:04.513008 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:04.513798 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:40:04.514925 | orchestrator | 2025-06-22 11:40:04.515398 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-22 11:40:04.516070 | orchestrator | Sunday 22 June 2025 11:40:04 +0000 (0:00:01.220) 0:03:37.190 *********** 2025-06-22 11:40:05.113331 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:05.113436 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:05.114459 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:40:05.115946 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:05.116623 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:05.117608 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:05.119717 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:05.119993 | orchestrator | 2025-06-22 11:40:05.121345 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-22 11:40:05.122086 | orchestrator | Sunday 22 June 2025 11:40:05 +0000 (0:00:00.605) 0:03:37.795 *********** 2025-06-22 11:40:05.729791 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:05.730961 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:05.731190 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:05.731963 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:05.732562 | orchestrator | changed: [testbed-node-0] 
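The motd tasks above check for `/etc/default/motd-news` and then disable the dynamic motd-news service. On Debian/Ubuntu that file gates the feature with an `ENABLED=` flag; a sketch of the effective change, assuming that file format (the role itself performs this via an Ansible module, not this code):

```python
import re

# Illustrative sketch of "Disable the dynamic motd-news service" on a
# Debian-family host: flip ENABLED=1 to ENABLED=0 in /etc/default/motd-news.
def disable_motd_news(text: str) -> str:
    """Return the config text with the motd-news ENABLED flag turned off."""
    return re.sub(r"^ENABLED=1$", "ENABLED=0", text, flags=re.M)

sample = "# Enable/disable the dynamic MOTD news service\nENABLED=1\n"
print(disable_motd_news(sample))
```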
2025-06-22 11:40:05.733398 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:05.733784 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:05.734509 | orchestrator | 2025-06-22 11:40:05.735086 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-22 11:40:05.736025 | orchestrator | Sunday 22 June 2025 11:40:05 +0000 (0:00:00.617) 0:03:38.413 *********** 2025-06-22 11:40:06.346720 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:06.347281 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:06.348200 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:06.350074 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:06.350113 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:06.350475 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:40:06.351200 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:06.351703 | orchestrator | 2025-06-22 11:40:06.352394 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-22 11:40:06.352679 | orchestrator | Sunday 22 June 2025 11:40:06 +0000 (0:00:00.613) 0:03:39.027 *********** 2025-06-22 11:40:07.368224 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591112.4743211, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.368366 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591170.162935, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.368503 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591180.3927598, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.368806 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591168.8700252, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.369389 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591179.8085093, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.371486 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591172.7522917, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.372647 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750591185.3756697, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.373834 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750591149.054857, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.375202 | orchestrator | changed: [testbed-node-3] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750591067.8719838, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.375831 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750591068.3600519, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.376790 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750591078.6409304, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.377521 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1750591077.3866618, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.378259 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750591073.399845, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.379048 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750591077.804229, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 11:40:07.379619 | orchestrator | 2025-06-22 11:40:07.380276 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-22 11:40:07.380670 | orchestrator | Sunday 22 June 2025 11:40:07 +0000 (0:00:01.025) 0:03:40.052 *********** 2025-06-22 11:40:08.484101 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:08.484196 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:08.485151 | orchestrator | changed: [testbed-node-5] 2025-06-22 
11:40:08.486422 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:08.487290 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:08.488378 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:08.489231 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:08.489755 | orchestrator | 2025-06-22 11:40:08.490381 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-22 11:40:08.490906 | orchestrator | Sunday 22 June 2025 11:40:08 +0000 (0:00:01.115) 0:03:41.167 *********** 2025-06-22 11:40:09.536305 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:09.537134 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:09.539364 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:09.539466 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:09.540999 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:09.541508 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:09.542355 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:09.543697 | orchestrator | 2025-06-22 11:40:09.544263 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-22 11:40:09.545209 | orchestrator | Sunday 22 June 2025 11:40:09 +0000 (0:00:01.052) 0:03:42.219 *********** 2025-06-22 11:40:10.601171 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:10.601537 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:10.602838 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:10.603933 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:10.604759 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:10.605484 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:10.606708 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:10.607394 | orchestrator | 2025-06-22 11:40:10.607994 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 
2025-06-22 11:40:10.608685 | orchestrator | Sunday 22 June 2025 11:40:10 +0000 (0:00:01.065) 0:03:43.285 *********** 2025-06-22 11:40:10.651851 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:40:10.676578 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:40:10.711638 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:40:10.741525 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:40:10.814131 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:40:10.818006 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:40:10.818097 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:40:10.818120 | orchestrator | 2025-06-22 11:40:10.818140 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-22 11:40:10.818160 | orchestrator | Sunday 22 June 2025 11:40:10 +0000 (0:00:00.214) 0:03:43.499 *********** 2025-06-22 11:40:11.503835 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:11.506065 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:11.506820 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:11.507713 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:40:11.509357 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:11.511265 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:11.511947 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:11.512941 | orchestrator | 2025-06-22 11:40:11.513693 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-22 11:40:11.514465 | orchestrator | Sunday 22 June 2025 11:40:11 +0000 (0:00:00.686) 0:03:44.186 *********** 2025-06-22 11:40:11.820150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:40:11.821202 | orchestrator | 2025-06-22 11:40:11.822862 | 
orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-22 11:40:11.824058 | orchestrator | Sunday 22 June 2025 11:40:11 +0000 (0:00:00.318) 0:03:44.505 *********** 2025-06-22 11:40:19.886485 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:19.887953 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:19.889081 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:19.890273 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:19.891479 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:19.892106 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:19.892749 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:19.893439 | orchestrator | 2025-06-22 11:40:19.894097 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-22 11:40:19.894712 | orchestrator | Sunday 22 June 2025 11:40:19 +0000 (0:00:08.063) 0:03:52.569 *********** 2025-06-22 11:40:21.059758 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:21.063245 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:21.064001 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:21.064772 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:21.065697 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:40:21.066534 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:21.067057 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:21.067977 | orchestrator | 2025-06-22 11:40:21.068747 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-22 11:40:21.069435 | orchestrator | Sunday 22 June 2025 11:40:21 +0000 (0:00:01.173) 0:03:53.743 *********** 2025-06-22 11:40:22.098387 | orchestrator | ok: [testbed-manager] 2025-06-22 11:40:22.098683 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:40:22.099630 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:40:22.102261 | orchestrator | ok: [testbed-node-4] 2025-06-22 
11:40:22.102306 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:40:22.102328 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:40:22.103197 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:40:22.104540 | orchestrator | 2025-06-22 11:40:22.105210 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-22 11:40:22.106144 | orchestrator | Sunday 22 June 2025 11:40:22 +0000 (0:00:01.038) 0:03:54.781 *********** 2025-06-22 11:40:22.585377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:40:22.585788 | orchestrator | 2025-06-22 11:40:22.587294 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-22 11:40:22.590464 | orchestrator | Sunday 22 June 2025 11:40:22 +0000 (0:00:00.488) 0:03:55.269 *********** 2025-06-22 11:40:31.080557 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:31.083408 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:31.083440 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:31.085647 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:31.086659 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:31.087346 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:31.088196 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:31.089198 | orchestrator | 2025-06-22 11:40:31.090297 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-22 11:40:31.091106 | orchestrator | Sunday 22 June 2025 11:40:31 +0000 (0:00:08.494) 0:04:03.764 *********** 2025-06-22 11:40:31.713893 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:31.713988 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:31.716339 | orchestrator | 
changed: [testbed-node-4] 2025-06-22 11:40:31.717343 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:31.718201 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:31.719452 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:31.719988 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:31.721017 | orchestrator | 2025-06-22 11:40:31.721712 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-22 11:40:31.722479 | orchestrator | Sunday 22 June 2025 11:40:31 +0000 (0:00:00.633) 0:04:04.397 *********** 2025-06-22 11:40:32.845016 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:32.846659 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:32.848397 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:32.849444 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:32.850453 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:32.851400 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:32.853795 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:32.855462 | orchestrator | 2025-06-22 11:40:32.857322 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-22 11:40:32.858168 | orchestrator | Sunday 22 June 2025 11:40:32 +0000 (0:00:01.131) 0:04:05.529 *********** 2025-06-22 11:40:33.973280 | orchestrator | changed: [testbed-manager] 2025-06-22 11:40:33.973385 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:40:33.973724 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:40:33.973748 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:40:33.973927 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:40:33.974535 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:40:33.975016 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:40:33.975992 | orchestrator | 2025-06-22 11:40:33.976016 | orchestrator | TASK [osism.commons.cleanup : Gather variables for 
each operating system] ******
2025-06-22 11:40:33.976030 | orchestrator | Sunday 22 June 2025 11:40:33 +0000 (0:00:01.127) 0:04:06.657 ***********
2025-06-22 11:40:34.087259 | orchestrator | ok: [testbed-manager]
2025-06-22 11:40:34.126579 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:40:34.165781 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:40:34.201591 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:40:34.264248 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:40:34.264913 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:40:34.265539 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:40:34.266312 | orchestrator |
2025-06-22 11:40:34.267042 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-22 11:40:34.267731 | orchestrator | Sunday 22 June 2025 11:40:34 +0000 (0:00:00.291) 0:04:06.949 ***********
2025-06-22 11:40:34.395787 | orchestrator | ok: [testbed-manager]
2025-06-22 11:40:34.438254 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:40:34.487956 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:40:34.533682 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:40:34.610780 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:40:34.611039 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:40:34.611788 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:40:34.612804 | orchestrator |
2025-06-22 11:40:34.614707 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-22 11:40:34.614758 | orchestrator | Sunday 22 June 2025 11:40:34 +0000 (0:00:00.345) 0:04:07.295 ***********
2025-06-22 11:40:34.716929 | orchestrator | ok: [testbed-manager]
2025-06-22 11:40:34.754233 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:40:34.791454 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:40:34.828526 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:40:34.908382 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:40:34.908543 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:40:34.909683 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:40:34.913370 | orchestrator |
2025-06-22 11:40:34.914434 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-22 11:40:34.915208 | orchestrator | Sunday 22 June 2025 11:40:34 +0000 (0:00:00.296) 0:04:07.592 ***********
2025-06-22 11:40:40.473326 | orchestrator | ok: [testbed-manager]
2025-06-22 11:40:40.473636 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:40:40.475432 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:40:40.475463 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:40:40.476171 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:40:40.476805 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:40:40.477291 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:40:40.478165 | orchestrator |
2025-06-22 11:40:40.478874 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-22 11:40:40.479276 | orchestrator | Sunday 22 June 2025 11:40:40 +0000 (0:00:05.563) 0:04:13.155 ***********
2025-06-22 11:40:40.834747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:40:40.835699 | orchestrator |
2025-06-22 11:40:40.836782 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-22 11:40:40.839238 | orchestrator | Sunday 22 June 2025 11:40:40 +0000 (0:00:00.363) 0:04:13.518 ***********
2025-06-22 11:40:40.921251 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-22 11:40:40.921706 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-22 11:40:40.921737 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-22 11:40:40.922626 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-22 11:40:40.956425 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:40:41.016978 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:40:41.017050 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-22 11:40:41.017058 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-22 11:40:41.058946 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-22 11:40:41.061026 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:40:41.061279 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-22 11:40:41.102381 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-22 11:40:41.103216 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:40:41.104313 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-22 11:40:41.175209 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-22 11:40:41.175468 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:40:41.176748 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-22 11:40:41.177722 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:40:41.178390 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-22 11:40:41.179273 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-22 11:40:41.179943 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:40:41.180705 | orchestrator |
2025-06-22 11:40:41.182667 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-22 11:40:41.183537 | orchestrator | Sunday 22 June 2025 11:40:41 +0000 (0:00:00.340) 0:04:13.859 ***********
2025-06-22 11:40:41.594283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:40:41.594526 | orchestrator |
2025-06-22 11:40:41.595543 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-22 11:40:41.595781 | orchestrator | Sunday 22 June 2025 11:40:41 +0000 (0:00:00.415) 0:04:14.275 ***********
2025-06-22 11:40:41.666780 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-22 11:40:41.707830 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-22 11:40:41.708587 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:40:41.709609 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-22 11:40:41.741939 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:40:41.742901 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-22 11:40:41.775171 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:40:41.813061 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-22 11:40:41.813530 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:40:41.897082 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:40:41.897678 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-22 11:40:41.898675 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:40:41.899741 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-22 11:40:41.900840 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:40:41.902441 | orchestrator |
2025-06-22 11:40:41.903198 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-22 11:40:41.903834 | orchestrator | Sunday 22 June 2025 11:40:41 +0000 (0:00:00.304) 0:04:14.580 ***********
2025-06-22 11:40:42.418266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:40:42.418933 | orchestrator |
2025-06-22 11:40:42.419967 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-22 11:40:42.421289 | orchestrator | Sunday 22 June 2025 11:40:42 +0000 (0:00:00.520) 0:04:15.100 ***********
2025-06-22 11:41:17.012347 | orchestrator | changed: [testbed-manager]
2025-06-22 11:41:17.012465 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:17.014998 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:17.017027 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:17.018639 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:17.020223 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:17.021190 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:17.022301 | orchestrator |
2025-06-22 11:41:17.022585 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-22 11:41:17.023436 | orchestrator | Sunday 22 June 2025 11:41:17 +0000 (0:00:34.593) 0:04:49.693 ***********
2025-06-22 11:41:24.911232 | orchestrator | changed: [testbed-manager]
2025-06-22 11:41:24.912284 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:24.913553 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:24.916911 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:24.917440 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:24.918962 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:24.920035 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:24.921012 | orchestrator |
2025-06-22 11:41:24.922084 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-22 11:41:24.922856 | orchestrator | Sunday 22 June 2025 11:41:24 +0000 (0:00:07.899) 0:04:57.593 ***********
2025-06-22 11:41:32.243590 | orchestrator | changed: [testbed-manager]
2025-06-22 11:41:32.243897 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:32.245677 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:32.247223 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:32.248578 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:32.249353 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:32.250070 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:32.250734 | orchestrator |
2025-06-22 11:41:32.251455 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-22 11:41:32.252632 | orchestrator | Sunday 22 June 2025 11:41:32 +0000 (0:00:07.333) 0:05:04.927 ***********
2025-06-22 11:41:33.824584 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:41:33.825545 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:33.826429 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:41:33.827463 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:41:33.828609 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:41:33.829499 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:41:33.830317 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:41:33.830774 | orchestrator |
2025-06-22 11:41:33.831704 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-22 11:41:33.832138 | orchestrator | Sunday 22 June 2025 11:41:33 +0000 (0:00:01.581) 0:05:06.508 ***********
2025-06-22 11:41:39.686763 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:39.686989 | orchestrator | changed: [testbed-manager]
2025-06-22 11:41:39.687574 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:39.690106 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:39.690165 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:39.691281 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:39.692039 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:39.692761 | orchestrator |
2025-06-22 11:41:39.693465 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-22 11:41:39.694304 | orchestrator | Sunday 22 June 2025 11:41:39 +0000 (0:00:05.860) 0:05:12.369 ***********
2025-06-22 11:41:40.097112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:41:40.098098 | orchestrator |
2025-06-22 11:41:40.099269 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-22 11:41:40.100386 | orchestrator | Sunday 22 June 2025 11:41:40 +0000 (0:00:00.412) 0:05:12.781 ***********
2025-06-22 11:41:40.835512 | orchestrator | changed: [testbed-manager]
2025-06-22 11:41:40.836115 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:40.837476 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:40.838739 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:40.839581 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:40.841097 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:40.842355 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:40.843269 | orchestrator |
2025-06-22 11:41:40.844411 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-22 11:41:40.845679 | orchestrator | Sunday 22 June 2025 11:41:40 +0000 (0:00:00.737) 0:05:13.518 ***********
2025-06-22 11:41:42.515302 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:42.516579 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:41:42.517548 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:41:42.518706 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:41:42.519342 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:41:42.520450 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:41:42.521633 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:41:42.521863 | orchestrator |
2025-06-22 11:41:42.522547 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-22 11:41:42.522992 | orchestrator | Sunday 22 June 2025 11:41:42 +0000 (0:00:01.679) 0:05:15.197 ***********
2025-06-22 11:41:43.273002 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:43.273669 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:43.275300 | orchestrator | changed: [testbed-manager]
2025-06-22 11:41:43.276466 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:43.277373 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:43.278492 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:43.279469 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:43.280703 | orchestrator |
2025-06-22 11:41:43.280984 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-22 11:41:43.282174 | orchestrator | Sunday 22 June 2025 11:41:43 +0000 (0:00:00.758) 0:05:15.955 ***********
2025-06-22 11:41:43.389764 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:41:43.442177 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:41:43.486898 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:41:43.522098 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:41:43.593098 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:41:43.593395 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:41:43.594593 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:41:43.594909 | orchestrator |
2025-06-22 11:41:43.595661 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-22 11:41:43.596539 | orchestrator | Sunday 22 June 2025 11:41:43 +0000 (0:00:00.322) 0:05:16.277 ***********
2025-06-22 11:41:43.692306 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:41:43.727477 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:41:43.757832 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:41:43.791462 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:41:43.976876 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:41:43.978395 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:41:43.980028 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:41:43.980639 | orchestrator |
2025-06-22 11:41:43.981750 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-22 11:41:43.983090 | orchestrator | Sunday 22 June 2025 11:41:43 +0000 (0:00:00.381) 0:05:16.659 ***********
2025-06-22 11:41:44.071028 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:44.103961 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:41:44.147532 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:41:44.183762 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:41:44.218160 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:41:44.315536 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:41:44.315773 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:41:44.317699 | orchestrator |
2025-06-22 11:41:44.318839 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-22 11:41:44.320110 | orchestrator | Sunday 22 June 2025 11:41:44 +0000 (0:00:00.340) 0:05:16.999 ***********
2025-06-22 11:41:44.387842 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:41:44.420232 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:41:44.503751 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:41:44.537114 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:41:44.612673 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:41:44.612890 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:41:44.614634 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:41:44.615284 | orchestrator |
2025-06-22 11:41:44.616247 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-22 11:41:44.618541 | orchestrator | Sunday 22 June 2025 11:41:44 +0000 (0:00:00.297) 0:05:17.297 ***********
2025-06-22 11:41:44.715175 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:44.760946 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:41:44.815551 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:41:44.852150 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:41:44.932741 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:41:44.933404 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:41:44.934889 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:41:44.935051 | orchestrator |
2025-06-22 11:41:44.935886 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-22 11:41:44.936469 | orchestrator | Sunday 22 June 2025 11:41:44 +0000 (0:00:00.319) 0:05:17.617 ***********
2025-06-22 11:41:45.003842 | orchestrator | ok: [testbed-manager] =>
2025-06-22 11:41:45.007277 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.038073 | orchestrator | ok: [testbed-node-3] =>
2025-06-22 11:41:45.038342 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.078438 | orchestrator | ok: [testbed-node-4] =>
2025-06-22 11:41:45.079174 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.110445 | orchestrator | ok: [testbed-node-5] =>
2025-06-22 11:41:45.112147 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.220280 | orchestrator | ok: [testbed-node-0] =>
2025-06-22 11:41:45.220361 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.221511 | orchestrator | ok: [testbed-node-1] =>
2025-06-22 11:41:45.222223 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.225653 | orchestrator | ok: [testbed-node-2] =>
2025-06-22 11:41:45.225966 | orchestrator |  docker_version: 5:27.5.1
2025-06-22 11:41:45.227526 | orchestrator |
2025-06-22 11:41:45.229475 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-06-22 11:41:45.230235 | orchestrator | Sunday 22 June 2025 11:41:45 +0000 (0:00:00.287) 0:05:17.905 ***********
2025-06-22 11:41:45.357820 | orchestrator | ok: [testbed-manager] =>
2025-06-22 11:41:45.358072 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.505115 | orchestrator | ok: [testbed-node-3] =>
2025-06-22 11:41:45.505378 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.541140 | orchestrator | ok: [testbed-node-4] =>
2025-06-22 11:41:45.541735 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.574713 | orchestrator | ok: [testbed-node-5] =>
2025-06-22 11:41:45.575304 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.641540 | orchestrator | ok: [testbed-node-0] =>
2025-06-22 11:41:45.642404 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.643205 | orchestrator | ok: [testbed-node-1] =>
2025-06-22 11:41:45.644759 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.644823 | orchestrator | ok: [testbed-node-2] =>
2025-06-22 11:41:45.645959 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-22 11:41:45.646422 | orchestrator |
2025-06-22 11:41:45.647564 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-06-22 11:41:45.648446 | orchestrator | Sunday 22 June 2025 11:41:45 +0000 (0:00:00.420) 0:05:18.325 ***********
2025-06-22 11:41:45.707843 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:41:45.739987 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:41:45.775573 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:41:45.806569 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:41:45.842697 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:41:45.906337 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:41:45.907083 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:41:45.908479 | orchestrator |
2025-06-22 11:41:45.909689 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-22 11:41:45.910273 | orchestrator | Sunday 22 June 2025 11:41:45 +0000 (0:00:00.265) 0:05:18.591 ***********
2025-06-22 11:41:45.982250 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:41:46.015456 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:41:46.050730 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:41:46.082360 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:41:46.113339 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:41:46.178173 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:41:46.180528 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:41:46.182544 | orchestrator |
2025-06-22 11:41:46.183965 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-22 11:41:46.185334 | orchestrator | Sunday 22 June 2025 11:41:46 +0000 (0:00:00.270) 0:05:18.861 ***********
2025-06-22 11:41:46.616996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:41:46.617171 | orchestrator |
2025-06-22 11:41:46.618199 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-22 11:41:46.619241 | orchestrator | Sunday 22 June 2025 11:41:46 +0000 (0:00:00.438) 0:05:19.300 ***********
2025-06-22 11:41:47.524051 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:47.524950 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:41:47.525472 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:41:47.526070 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:41:47.526589 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:41:47.527897 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:41:47.528458 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:41:47.529439 | orchestrator |
2025-06-22 11:41:47.530570 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-22 11:41:47.531160 | orchestrator | Sunday 22 June 2025 11:41:47 +0000 (0:00:00.906) 0:05:20.206 ***********
2025-06-22 11:41:50.372384 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:41:50.373037 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:41:50.373761 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:41:50.374687 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:50.375448 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:41:50.376247 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:41:50.376986 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:41:50.377935 | orchestrator |
2025-06-22 11:41:50.378532 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-22 11:41:50.378984 | orchestrator | Sunday 22 June 2025 11:41:50 +0000 (0:00:02.849) 0:05:23.056 ***********
2025-06-22 11:41:50.441067 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-22 11:41:50.518722 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-22 11:41:50.522509 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-22 11:41:50.522545 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-22 11:41:50.522558 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-22 11:41:50.522570 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-22 11:41:50.585367 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:41:50.585990 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-22 11:41:50.592435 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-22 11:41:50.592503 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-22 11:41:50.827248 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:41:50.827589 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-22 11:41:50.828377 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-22 11:41:50.829399 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-22 11:41:50.900557 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:41:50.900989 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-22 11:41:50.901842 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-22 11:41:50.902466 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-22 11:41:50.979104 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:41:50.979406 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-22 11:41:50.979954 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-22 11:41:50.980447 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-22 11:41:51.129675 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:41:51.130080 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:41:51.131074 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-22 11:41:51.132156 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-22 11:41:51.135815 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-22 11:41:51.135847 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:41:51.135860 | orchestrator |
2025-06-22 11:41:51.135872 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-22 11:41:51.135885 | orchestrator | Sunday 22 June 2025 11:41:51 +0000 (0:00:00.757) 0:05:23.813 ***********
2025-06-22 11:41:57.262622 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:57.262937 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:57.265647 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:57.266563 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:57.268036 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:57.269145 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:57.269469 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:57.270319 | orchestrator |
2025-06-22 11:41:57.270696 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-22 11:41:57.271448 | orchestrator | Sunday 22 June 2025 11:41:57 +0000 (0:00:06.130) 0:05:29.943 ***********
2025-06-22 11:41:58.440993 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:41:58.442488 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:41:58.443985 | orchestrator | ok: [testbed-manager]
2025-06-22 11:41:58.445035 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:41:58.445437 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:41:58.447498 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:41:58.448953 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:41:58.450454 | orchestrator |
2025-06-22 11:41:58.451331 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-22 11:41:58.452400 | orchestrator | Sunday 22 June 2025 11:41:58 +0000 (0:00:01.179) 0:05:31.122 ***********
2025-06-22 11:42:05.974254 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:05.974432 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:05.977724 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:05.979897 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:05.980572 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:05.981475 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:05.982427 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:05.983155 | orchestrator |
2025-06-22 11:42:05.983709 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-22 11:42:05.984209 | orchestrator | Sunday 22 June 2025 11:42:05 +0000 (0:00:07.534) 0:05:38.657 ***********
2025-06-22 11:42:09.049364 | orchestrator | changed: [testbed-manager]
2025-06-22 11:42:09.049664 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:09.049700 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:09.049867 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:09.051126 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:09.051217 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:09.051872 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:09.052869 | orchestrator |
2025-06-22 11:42:09.053058 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-22 11:42:09.055556 | orchestrator | Sunday 22 June 2025 11:42:09 +0000 (0:00:03.073) 0:05:41.731 ***********
2025-06-22 11:42:10.601603 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:10.601814 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:10.603263 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:10.603361 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:10.603862 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:10.604802 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:10.605319 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:10.605956 | orchestrator |
2025-06-22 11:42:10.606433 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-22 11:42:10.606890 | orchestrator | Sunday 22 June 2025 11:42:10 +0000 (0:00:01.552) 0:05:43.283 ***********
2025-06-22 11:42:12.002271 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:12.002384 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:12.002694 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:12.003596 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:12.005666 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:12.006391 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:12.007267 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:12.008292 | orchestrator |
2025-06-22 11:42:12.009314 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-22 11:42:12.009695 | orchestrator | Sunday 22 June 2025 11:42:11 +0000 (0:00:01.401) 0:05:44.685 ***********
2025-06-22 11:42:12.204839 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:42:12.267004 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:42:12.332545 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:42:12.400468 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:42:12.590000 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:42:12.590455 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:42:12.591716 | orchestrator | changed: [testbed-manager]
2025-06-22 11:42:12.592712 | orchestrator |
2025-06-22 11:42:12.593919 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-22 11:42:12.594828 | orchestrator | Sunday 22 June 2025 11:42:12 +0000 (0:00:00.587) 0:05:45.272 ***********
2025-06-22 11:42:22.251377 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:22.253065 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:22.254606 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:22.255874 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:22.256819 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:22.257645 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:22.258607 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:22.259332 | orchestrator |
2025-06-22 11:42:22.260375 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-22 11:42:22.261423 | orchestrator | Sunday 22 June 2025 11:42:22 +0000 (0:00:09.659) 0:05:54.932 ***********
2025-06-22 11:42:23.327986 | orchestrator | changed: [testbed-manager]
2025-06-22 11:42:23.328234 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:23.329619 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:23.333170 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:23.333981 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:23.335514 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:23.336534 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:23.337436 | orchestrator |
2025-06-22 11:42:23.338537 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-22 11:42:23.339474 | orchestrator | Sunday 22 June 2025 11:42:23 +0000 (0:00:01.078) 0:05:56.010 ***********
2025-06-22 11:42:31.807904 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:31.808649 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:31.809564 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:31.811604 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:31.812373 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:31.813417 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:31.813508 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:31.814256 | orchestrator |
2025-06-22 11:42:31.814888 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-22 11:42:31.815598 | orchestrator | Sunday 22 June 2025 11:42:31 +0000 (0:00:08.478) 0:06:04.489 ***********
2025-06-22 11:42:42.152327 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:42.152536 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:42.153162 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:42.154863 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:42.155756 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:42.156945 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:42.157269 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:42.158270 | orchestrator |
2025-06-22 11:42:42.159048 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-22 11:42:42.159924 | orchestrator | Sunday 22 June 2025 11:42:42 +0000 (0:00:10.344) 0:06:14.834 ***********
2025-06-22 11:42:42.505170 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-22 11:42:43.328482 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-22 11:42:43.329029 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-22 11:42:43.330248 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-22 11:42:43.331436 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-22 11:42:43.332341 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-22 11:42:43.333663 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-22 11:42:43.334986 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-22 11:42:43.336174 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-22 11:42:43.336336 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-22 11:42:43.337141 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-22 11:42:43.338134 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-22 11:42:43.338342 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-22 11:42:43.338982 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-22 11:42:43.339576 | orchestrator |
2025-06-22 11:42:43.340003 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-22 11:42:43.340560 | orchestrator | Sunday 22 June 2025 11:42:43 +0000 (0:00:01.179) 0:06:16.013 ***********
2025-06-22 11:42:43.458215 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:42:43.526457 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:42:43.588182 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:42:43.650776 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:42:43.718778 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:42:43.837146 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:42:43.837338 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:42:43.838400 | orchestrator |
2025-06-22 11:42:43.839371 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-22 11:42:43.839860 | orchestrator | Sunday 22 June 2025 11:42:43 +0000 (0:00:00.506) 0:06:16.519 ***********
2025-06-22 11:42:47.486569 | orchestrator | ok: [testbed-manager]
2025-06-22 11:42:47.487334 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:42:47.488938 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:42:47.490317 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:42:47.490957 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:42:47.492808 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:42:47.494519 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:42:47.494846 | orchestrator |
2025-06-22 11:42:47.495885 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-22 11:42:47.497034 | orchestrator | Sunday 22 June 2025 11:42:47 +0000 (0:00:03.648) 0:06:20.168 ***********
2025-06-22 11:42:47.621492 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:42:47.685571 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:42:47.749136 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:42:47.819602 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:42:47.883341 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:42:47.979826 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:42:47.980454 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:42:47.981230 | orchestrator |
2025-06-22 11:42:47.981942 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-22 11:42:47.984652 | orchestrator | Sunday 22 June 2025 11:42:47 +0000 (0:00:00.494) 0:06:20.663 ***********
2025-06-22 11:42:48.056383 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-22 11:42:48.056571 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-22 11:42:48.124814 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:42:48.125445 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-22 11:42:48.126586 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-22 11:42:48.206064 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:42:48.206526 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-22 11:42:48.207579 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-22 11:42:48.280025 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:42:48.280569 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-22 11:42:48.281618 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-22 11:42:48.350176 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:42:48.350939 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-22 11:42:48.351978 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-22 11:42:48.419660 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:42:48.420096 |
orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-22 11:42:48.421102 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-22 11:42:48.528130 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:42:48.528279 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-22 11:42:48.528387 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-22 11:42:48.528914 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:42:48.529674 | orchestrator | 2025-06-22 11:42:48.534346 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-22 11:42:48.534390 | orchestrator | Sunday 22 June 2025 11:42:48 +0000 (0:00:00.549) 0:06:21.212 *********** 2025-06-22 11:42:48.654514 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:42:48.721072 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:42:48.782426 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:42:48.847608 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:42:48.913667 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:42:49.008146 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:42:49.009242 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:42:49.010374 | orchestrator | 2025-06-22 11:42:49.011405 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-22 11:42:49.014546 | orchestrator | Sunday 22 June 2025 11:42:49 +0000 (0:00:00.478) 0:06:21.691 *********** 2025-06-22 11:42:49.147424 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:42:49.211325 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:42:49.275805 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:42:49.344465 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:42:49.419014 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:42:49.524218 | orchestrator | 
skipping: [testbed-node-1] 2025-06-22 11:42:49.524306 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:42:49.524854 | orchestrator | 2025-06-22 11:42:49.525414 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-22 11:42:49.526210 | orchestrator | Sunday 22 June 2025 11:42:49 +0000 (0:00:00.516) 0:06:22.207 *********** 2025-06-22 11:42:49.655004 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:42:49.717790 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:42:49.948073 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:42:50.017881 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:42:50.080757 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:42:50.213659 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:42:50.214264 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:42:50.214667 | orchestrator | 2025-06-22 11:42:50.215414 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-22 11:42:50.215995 | orchestrator | Sunday 22 June 2025 11:42:50 +0000 (0:00:00.689) 0:06:22.897 *********** 2025-06-22 11:42:51.806409 | orchestrator | ok: [testbed-manager] 2025-06-22 11:42:51.806756 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:42:51.807338 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:42:51.807713 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:42:51.808249 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:42:51.809030 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:42:51.810285 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:42:51.810667 | orchestrator | 2025-06-22 11:42:51.811307 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-22 11:42:51.813462 | orchestrator | Sunday 22 June 2025 11:42:51 +0000 (0:00:01.591) 0:06:24.488 *********** 2025-06-22 11:42:52.604185 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:42:52.604925 | orchestrator | 2025-06-22 11:42:52.608386 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-22 11:42:52.609328 | orchestrator | Sunday 22 June 2025 11:42:52 +0000 (0:00:00.794) 0:06:25.283 *********** 2025-06-22 11:42:53.008969 | orchestrator | ok: [testbed-manager] 2025-06-22 11:42:53.410922 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:42:53.411315 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:42:53.412204 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:42:53.413190 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:42:53.413685 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:42:53.414228 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:42:53.414812 | orchestrator | 2025-06-22 11:42:53.415214 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-22 11:42:53.416918 | orchestrator | Sunday 22 June 2025 11:42:53 +0000 (0:00:00.810) 0:06:26.094 *********** 2025-06-22 11:42:53.840358 | orchestrator | ok: [testbed-manager] 2025-06-22 11:42:53.904832 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:42:54.506190 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:42:54.506500 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:42:54.507397 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:42:54.508425 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:42:54.508964 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:42:54.513200 | orchestrator | 2025-06-22 11:42:54.514139 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-22 11:42:54.514831 | orchestrator | Sunday 22 June 2025 11:42:54 
+0000 (0:00:01.095) 0:06:27.189 *********** 2025-06-22 11:42:55.820270 | orchestrator | ok: [testbed-manager] 2025-06-22 11:42:55.820460 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:42:55.821293 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:42:55.821955 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:42:55.822960 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:42:55.823674 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:42:55.824196 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:42:55.824876 | orchestrator | 2025-06-22 11:42:55.825595 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-22 11:42:55.826653 | orchestrator | Sunday 22 June 2025 11:42:55 +0000 (0:00:01.314) 0:06:28.503 *********** 2025-06-22 11:42:56.038893 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:42:57.233113 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:42:57.233344 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:42:57.235047 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:42:57.236054 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:42:57.236959 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:42:57.238326 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:42:57.238962 | orchestrator | 2025-06-22 11:42:57.240301 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-22 11:42:57.242494 | orchestrator | Sunday 22 June 2025 11:42:57 +0000 (0:00:01.410) 0:06:29.914 *********** 2025-06-22 11:42:58.533126 | orchestrator | ok: [testbed-manager] 2025-06-22 11:42:58.537227 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:42:58.538165 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:42:58.540322 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:42:58.541595 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:42:58.543206 | orchestrator | changed: [testbed-node-1] 
2025-06-22 11:42:58.544040 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:42:58.545119 | orchestrator | 2025-06-22 11:42:58.546315 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-22 11:42:58.547072 | orchestrator | Sunday 22 June 2025 11:42:58 +0000 (0:00:01.300) 0:06:31.214 *********** 2025-06-22 11:43:00.092938 | orchestrator | changed: [testbed-manager] 2025-06-22 11:43:00.093512 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:43:00.094973 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:43:00.096659 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:43:00.097796 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:43:00.098342 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:43:00.098843 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:43:00.100627 | orchestrator | 2025-06-22 11:43:00.100681 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-22 11:43:00.102104 | orchestrator | Sunday 22 June 2025 11:43:00 +0000 (0:00:01.560) 0:06:32.775 *********** 2025-06-22 11:43:01.049222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:43:01.049833 | orchestrator | 2025-06-22 11:43:01.050692 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-22 11:43:01.051264 | orchestrator | Sunday 22 June 2025 11:43:01 +0000 (0:00:00.955) 0:06:33.730 *********** 2025-06-22 11:43:02.428989 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:43:02.430009 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:02.430509 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:02.433085 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:02.433665 | orchestrator | ok: 
[testbed-node-0] 2025-06-22 11:43:02.434596 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:02.435677 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:02.436299 | orchestrator | 2025-06-22 11:43:02.437069 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-22 11:43:02.437635 | orchestrator | Sunday 22 June 2025 11:43:02 +0000 (0:00:01.380) 0:06:35.110 *********** 2025-06-22 11:43:03.573225 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:03.574699 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:43:03.577040 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:03.577699 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:03.578304 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:03.579460 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:03.580399 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:03.580825 | orchestrator | 2025-06-22 11:43:03.581689 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-22 11:43:03.582299 | orchestrator | Sunday 22 June 2025 11:43:03 +0000 (0:00:01.143) 0:06:36.254 *********** 2025-06-22 11:43:05.174799 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:05.175475 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:43:05.176812 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:05.178368 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:05.179908 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:05.180893 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:05.181511 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:05.182653 | orchestrator | 2025-06-22 11:43:05.183209 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-22 11:43:05.183775 | orchestrator | Sunday 22 June 2025 11:43:05 +0000 (0:00:01.601) 0:06:37.855 *********** 2025-06-22 11:43:06.390653 | orchestrator | ok: [testbed-node-3] 2025-06-22 
11:43:06.390889 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:06.391593 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:06.392227 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:06.392815 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:06.393724 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:06.394857 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:06.394982 | orchestrator | 2025-06-22 11:43:06.395875 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-22 11:43:06.396507 | orchestrator | Sunday 22 June 2025 11:43:06 +0000 (0:00:01.213) 0:06:39.069 *********** 2025-06-22 11:43:07.639588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:43:07.640232 | orchestrator | 2025-06-22 11:43:07.642331 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.643983 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.941) 0:06:40.011 *********** 2025-06-22 11:43:07.644814 | orchestrator | 2025-06-22 11:43:07.645505 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.647264 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.039) 0:06:40.050 *********** 2025-06-22 11:43:07.648257 | orchestrator | 2025-06-22 11:43:07.648907 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.650162 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.068) 0:06:40.118 *********** 2025-06-22 11:43:07.650956 | orchestrator | 2025-06-22 11:43:07.651771 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.653179 | 
orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.039) 0:06:40.158 *********** 2025-06-22 11:43:07.654167 | orchestrator | 2025-06-22 11:43:07.655374 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.657147 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.038) 0:06:40.197 *********** 2025-06-22 11:43:07.657754 | orchestrator | 2025-06-22 11:43:07.658558 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.659453 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.046) 0:06:40.243 *********** 2025-06-22 11:43:07.660156 | orchestrator | 2025-06-22 11:43:07.660890 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 11:43:07.661595 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.039) 0:06:40.282 *********** 2025-06-22 11:43:07.662892 | orchestrator | 2025-06-22 11:43:07.663564 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 11:43:07.664270 | orchestrator | Sunday 22 June 2025 11:43:07 +0000 (0:00:00.038) 0:06:40.321 *********** 2025-06-22 11:43:08.971010 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:08.972119 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:08.973226 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:08.975253 | orchestrator | 2025-06-22 11:43:08.975746 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-22 11:43:08.976985 | orchestrator | Sunday 22 June 2025 11:43:08 +0000 (0:00:01.327) 0:06:41.649 *********** 2025-06-22 11:43:10.366113 | orchestrator | changed: [testbed-manager] 2025-06-22 11:43:10.366784 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:43:10.368455 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:43:10.368494 | orchestrator | changed: [testbed-node-5] 
2025-06-22 11:43:10.370629 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:43:10.371466 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:43:10.372340 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:43:10.373569 | orchestrator | 2025-06-22 11:43:10.374543 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-22 11:43:10.375855 | orchestrator | Sunday 22 June 2025 11:43:10 +0000 (0:00:01.397) 0:06:43.046 *********** 2025-06-22 11:43:11.510744 | orchestrator | changed: [testbed-manager] 2025-06-22 11:43:11.510959 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:43:11.512592 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:43:11.512860 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:43:11.513964 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:43:11.514716 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:43:11.515602 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:43:11.517054 | orchestrator | 2025-06-22 11:43:11.518332 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-22 11:43:11.518911 | orchestrator | Sunday 22 June 2025 11:43:11 +0000 (0:00:01.144) 0:06:44.191 *********** 2025-06-22 11:43:11.660873 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:43:13.662322 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:43:13.662538 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:43:13.663387 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:43:13.664536 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:43:13.666370 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:43:13.666917 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:43:13.668179 | orchestrator | 2025-06-22 11:43:13.668522 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-22 11:43:13.669292 | orchestrator | Sunday 22 June 2025 
11:43:13 +0000 (0:00:02.153) 0:06:46.344 *********** 2025-06-22 11:43:13.767437 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:43:13.767786 | orchestrator | 2025-06-22 11:43:13.768905 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-22 11:43:13.770092 | orchestrator | Sunday 22 June 2025 11:43:13 +0000 (0:00:00.104) 0:06:46.449 *********** 2025-06-22 11:43:14.771481 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:14.772551 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:43:14.773611 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:43:14.774424 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:43:14.775123 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:43:14.775871 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:43:14.776627 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:43:14.777216 | orchestrator | 2025-06-22 11:43:14.777920 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-22 11:43:14.778565 | orchestrator | Sunday 22 June 2025 11:43:14 +0000 (0:00:01.002) 0:06:47.452 *********** 2025-06-22 11:43:15.133012 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:43:15.202894 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:43:15.275625 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:43:15.344344 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:43:15.408085 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:43:15.556007 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:43:15.556542 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:43:15.557050 | orchestrator | 2025-06-22 11:43:15.557073 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-22 11:43:15.558550 | orchestrator | Sunday 22 June 2025 11:43:15 +0000 (0:00:00.784) 0:06:48.236 *********** 2025-06-22 11:43:16.501791 
| orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:43:16.502249 | orchestrator | 2025-06-22 11:43:16.502555 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-22 11:43:16.503322 | orchestrator | Sunday 22 June 2025 11:43:16 +0000 (0:00:00.946) 0:06:49.182 *********** 2025-06-22 11:43:16.974375 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:17.480485 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:43:17.481008 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:17.482498 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:17.484893 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:17.485539 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:17.486385 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:17.487208 | orchestrator | 2025-06-22 11:43:17.489334 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-22 11:43:17.489893 | orchestrator | Sunday 22 June 2025 11:43:17 +0000 (0:00:00.980) 0:06:50.163 *********** 2025-06-22 11:43:20.123859 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-22 11:43:20.125041 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-22 11:43:20.128739 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-22 11:43:20.130760 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-22 11:43:20.132103 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-22 11:43:20.133922 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-22 11:43:20.134434 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-22 11:43:20.135271 | orchestrator | ok: 
[testbed-manager] => (item=docker_images) 2025-06-22 11:43:20.136259 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-22 11:43:20.136980 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-22 11:43:20.137680 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-22 11:43:20.138730 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-22 11:43:20.139093 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-22 11:43:20.139779 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-22 11:43:20.140400 | orchestrator | 2025-06-22 11:43:20.141263 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-22 11:43:20.144002 | orchestrator | Sunday 22 June 2025 11:43:20 +0000 (0:00:02.640) 0:06:52.804 *********** 2025-06-22 11:43:20.262086 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:43:20.335322 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:43:20.399480 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:43:20.463986 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:43:20.550128 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:43:20.675354 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:43:20.677592 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:43:20.680298 | orchestrator | 2025-06-22 11:43:20.684399 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-22 11:43:20.684434 | orchestrator | Sunday 22 June 2025 11:43:20 +0000 (0:00:00.554) 0:06:53.358 *********** 2025-06-22 11:43:21.480012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 11:43:21.484147 | 
orchestrator | 2025-06-22 11:43:21.485915 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-22 11:43:21.489184 | orchestrator | Sunday 22 June 2025 11:43:21 +0000 (0:00:00.802) 0:06:54.160 *********** 2025-06-22 11:43:21.996061 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:22.065171 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:43:22.144921 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:22.595136 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:22.596044 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:22.598753 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:22.599743 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:22.600020 | orchestrator | 2025-06-22 11:43:22.601420 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-22 11:43:22.602722 | orchestrator | Sunday 22 June 2025 11:43:22 +0000 (0:00:01.113) 0:06:55.274 *********** 2025-06-22 11:43:22.982228 | orchestrator | ok: [testbed-manager] 2025-06-22 11:43:23.476647 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:43:23.476901 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:43:23.477778 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:43:23.478865 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:43:23.479649 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:43:23.480403 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:43:23.481036 | orchestrator | 2025-06-22 11:43:23.481806 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-22 11:43:23.482121 | orchestrator | Sunday 22 June 2025 11:43:23 +0000 (0:00:00.883) 0:06:56.157 *********** 2025-06-22 11:43:23.622457 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:43:23.691450 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:43:23.772127 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:43:23.840090 | 
orchestrator | skipping: [testbed-node-5]
2025-06-22 11:43:23.906365 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:43:24.030146 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:43:24.030996 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:43:24.031832 | orchestrator |
2025-06-22 11:43:24.032352 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-06-22 11:43:24.033435 | orchestrator | Sunday 22 June 2025 11:43:24 +0000 (0:00:00.556) 0:06:56.713 ***********
2025-06-22 11:43:25.495280 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:25.496260 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:25.497054 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:25.497520 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:25.499377 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:25.499651 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:25.500556 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:25.501165 | orchestrator |
2025-06-22 11:43:25.501847 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-06-22 11:43:25.503109 | orchestrator | Sunday 22 June 2025 11:43:25 +0000 (0:00:01.462) 0:06:58.176 ***********
2025-06-22 11:43:25.641950 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:43:25.709008 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:43:25.778934 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:43:25.854963 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:43:25.921781 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:43:26.020606 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:43:26.020830 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:43:26.021948 | orchestrator |
2025-06-22 11:43:26.022792 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-06-22 11:43:26.024059 | orchestrator | Sunday 22 June 2025 11:43:26 +0000 (0:00:00.526) 0:06:58.702 ***********
2025-06-22 11:43:33.772175 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:33.773187 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:43:33.773220 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:43:33.773232 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:43:33.779621 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:43:33.780396 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:43:33.780421 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:43:33.782592 | orchestrator |
2025-06-22 11:43:33.782616 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-06-22 11:43:33.782631 | orchestrator | Sunday 22 June 2025 11:43:33 +0000 (0:00:07.750) 0:07:06.453 ***********
2025-06-22 11:43:35.166245 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:35.166522 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:43:35.167324 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:43:35.168329 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:43:35.168727 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:43:35.171100 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:43:35.171729 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:43:35.172738 | orchestrator |
2025-06-22 11:43:35.173485 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-06-22 11:43:35.173966 | orchestrator | Sunday 22 June 2025 11:43:35 +0000 (0:00:01.397) 0:07:07.850 ***********
2025-06-22 11:43:36.901470 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:36.902155 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:43:36.902940 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:43:36.903639 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:43:36.904817 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:43:36.905125 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:43:36.905837 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:43:36.906409 | orchestrator |
2025-06-22 11:43:36.907227 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-22 11:43:36.907862 | orchestrator | Sunday 22 June 2025 11:43:36 +0000 (0:00:01.732) 0:07:09.583 ***********
2025-06-22 11:43:38.772983 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:38.774088 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:43:38.775829 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:43:38.777548 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:43:38.777587 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:43:38.778199 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:43:38.781231 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:43:38.781270 | orchestrator |
2025-06-22 11:43:38.782300 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-22 11:43:38.783221 | orchestrator | Sunday 22 June 2025 11:43:38 +0000 (0:00:01.870) 0:07:11.453 ***********
2025-06-22 11:43:39.241796 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:39.670470 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:39.671803 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:39.672060 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:39.676276 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:39.676301 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:39.676313 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:39.676325 | orchestrator |
2025-06-22 11:43:39.677072 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-22 11:43:39.677790 | orchestrator | Sunday 22 June 2025 11:43:39 +0000 (0:00:00.898) 0:07:12.352 ***********
2025-06-22 11:43:39.804462 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:43:39.867649 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:43:39.931114 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:43:39.999431 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:43:40.060400 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:43:40.429827 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:43:40.430323 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:43:40.433220 | orchestrator |
2025-06-22 11:43:40.435343 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-22 11:43:40.435370 | orchestrator | Sunday 22 June 2025 11:43:40 +0000 (0:00:00.759) 0:07:13.112 ***********
2025-06-22 11:43:40.567252 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:43:40.638905 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:43:40.705714 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:43:40.763658 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:43:40.834707 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:43:40.934787 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:43:40.935789 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:43:40.939357 | orchestrator |
2025-06-22 11:43:40.939402 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-22 11:43:40.939428 | orchestrator | Sunday 22 June 2025 11:43:40 +0000 (0:00:00.504) 0:07:13.617 ***********
2025-06-22 11:43:41.067394 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:41.131800 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:41.193902 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:41.467424 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:41.533255 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:41.638307 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:41.639856 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:41.643307 | orchestrator |
2025-06-22 11:43:41.643333 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-22 11:43:41.643348 | orchestrator | Sunday 22 June 2025 11:43:41 +0000 (0:00:00.704) 0:07:14.321 ***********
2025-06-22 11:43:41.777891 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:41.843060 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:41.912564 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:41.978255 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:42.043576 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:42.144363 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:42.145067 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:42.146332 | orchestrator |
2025-06-22 11:43:42.149299 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-22 11:43:42.149339 | orchestrator | Sunday 22 June 2025 11:43:42 +0000 (0:00:00.508) 0:07:14.829 ***********
2025-06-22 11:43:42.275226 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:42.347107 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:42.415049 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:42.485073 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:42.555995 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:42.653357 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:42.654559 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:42.657734 | orchestrator |
2025-06-22 11:43:42.657830 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-22 11:43:42.657848 | orchestrator | Sunday 22 June 2025 11:43:42 +0000 (0:00:00.505) 0:07:15.335 ***********
2025-06-22 11:43:48.266231 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:48.266802 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:48.267901 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:48.268381 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:48.268812 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:48.269344 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:48.269792 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:48.270346 | orchestrator |
2025-06-22 11:43:48.271294 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-22 11:43:48.274630 | orchestrator | Sunday 22 June 2025 11:43:48 +0000 (0:00:05.612) 0:07:20.948 ***********
2025-06-22 11:43:48.408931 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:43:48.477727 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:43:48.551434 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:43:48.645925 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:43:48.734836 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:43:48.861945 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:43:48.862867 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:43:48.864225 | orchestrator |
2025-06-22 11:43:48.865429 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-22 11:43:48.866239 | orchestrator | Sunday 22 June 2025 11:43:48 +0000 (0:00:00.596) 0:07:21.545 ***********
2025-06-22 11:43:49.958211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:43:49.958825 | orchestrator |
2025-06-22 11:43:49.959751 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-06-22 11:43:49.961008 | orchestrator | Sunday 22 June 2025 11:43:49 +0000 (0:00:01.096) 0:07:22.641 ***********
2025-06-22 11:43:51.854390 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:51.855925 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:51.856814 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:51.864032 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:51.865706 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:51.866691 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:51.868260 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:51.869075 | orchestrator |
2025-06-22 11:43:51.870368 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-06-22 11:43:51.871212 | orchestrator | Sunday 22 June 2025 11:43:51 +0000 (0:00:01.896) 0:07:24.537 ***********
2025-06-22 11:43:52.980860 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:52.981378 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:52.982990 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:52.983878 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:52.984948 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:52.986101 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:52.986801 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:52.987315 | orchestrator |
2025-06-22 11:43:52.987622 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-06-22 11:43:52.988828 | orchestrator | Sunday 22 June 2025 11:43:52 +0000 (0:00:01.124) 0:07:25.662 ***********
2025-06-22 11:43:53.664592 | orchestrator | ok: [testbed-manager]
2025-06-22 11:43:54.087803 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:43:54.088366 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:43:54.089048 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:43:54.091652 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:43:54.092768 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:43:54.093748 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:43:54.094402 | orchestrator |
2025-06-22 11:43:54.095098 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-06-22 11:43:54.096352 | orchestrator | Sunday 22 June 2025 11:43:54 +0000 (0:00:01.105) 0:07:26.767 ***********
2025-06-22 11:43:55.806996 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.808093 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.810820 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.811009 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.812373 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.813436 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.814822 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-22 11:43:55.815327 | orchestrator |
2025-06-22 11:43:55.816337 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-06-22 11:43:55.817801 | orchestrator | Sunday 22 June 2025 11:43:55 +0000 (0:00:01.721) 0:07:28.489 ***********
2025-06-22 11:43:56.647370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:43:56.647752 | orchestrator |
2025-06-22 11:43:56.648614 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-06-22 11:43:56.652396 | orchestrator | Sunday 22 June 2025 11:43:56 +0000 (0:00:00.838) 0:07:29.328 ***********
2025-06-22 11:44:05.260675 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:05.263636 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:05.263689 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:05.263701 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:05.263713 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:05.263724 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:05.264156 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:05.265997 | orchestrator |
2025-06-22 11:44:05.266521 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-06-22 11:44:05.267368 | orchestrator | Sunday 22 June 2025 11:44:05 +0000 (0:00:08.608) 0:07:37.936 ***********
2025-06-22 11:44:07.013271 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:07.014109 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:07.014759 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:07.018078 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:07.018766 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:07.019945 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:07.021008 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:07.022828 | orchestrator |
2025-06-22 11:44:07.025727 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-06-22 11:44:07.026504 | orchestrator | Sunday 22 June 2025 11:44:07 +0000 (0:00:01.757) 0:07:39.694 ***********
2025-06-22 11:44:08.295227 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:08.295333 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:08.296585 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:08.296610 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:08.299068 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:08.300455 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:08.300487 | orchestrator |
2025-06-22 11:44:08.300863 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-06-22 11:44:08.301447 | orchestrator | Sunday 22 June 2025 11:44:08 +0000 (0:00:01.279) 0:07:40.974 ***********
2025-06-22 11:44:09.840836 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:09.841052 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:09.842894 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:09.843910 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:09.845508 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:09.847130 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:09.848158 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:09.849055 | orchestrator |
2025-06-22 11:44:09.849783 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-06-22 11:44:09.851849 | orchestrator |
2025-06-22 11:44:09.852497 | orchestrator | TASK [Include hardening role] **************************************************
2025-06-22 11:44:09.853902 | orchestrator | Sunday 22 June 2025 11:44:09 +0000 (0:00:01.549) 0:07:42.523 ***********
2025-06-22 11:44:09.996840 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:44:10.061674 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:44:10.134190 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:44:10.203715 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:44:10.267720 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:44:10.412139 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:44:10.412838 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:44:10.413835 | orchestrator |
2025-06-22 11:44:10.415774 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-06-22 11:44:10.417103 | orchestrator |
2025-06-22 11:44:10.417938 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-06-22 11:44:10.418791 | orchestrator | Sunday 22 June 2025 11:44:10 +0000 (0:00:00.570) 0:07:43.093 ***********
2025-06-22 11:44:11.783594 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:11.783911 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:11.785111 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:11.786117 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:11.786688 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:11.787274 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:11.788157 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:11.788812 | orchestrator |
2025-06-22 11:44:11.789100 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-06-22 11:44:11.789483 | orchestrator | Sunday 22 June 2025 11:44:11 +0000 (0:00:01.370) 0:07:44.464 ***********
2025-06-22 11:44:13.531117 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:13.531790 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:13.533901 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:13.534001 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:13.535913 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:13.536220 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:13.536494 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:13.537083 | orchestrator |
2025-06-22 11:44:13.537615 | orchestrator | TASK [Include auditd role] *****************************************************
2025-06-22 11:44:13.538011 | orchestrator | Sunday 22 June 2025 11:44:13 +0000 (0:00:01.746) 0:07:46.211 ***********
2025-06-22 11:44:13.660289 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:44:13.729477 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:44:13.808236 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:44:13.870485 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:44:13.937301 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:44:14.328190 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:44:14.329451 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:44:14.330381 | orchestrator |
2025-06-22 11:44:14.331426 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-06-22 11:44:14.332547 | orchestrator | Sunday 22 June 2025 11:44:14 +0000 (0:00:00.798) 0:07:47.010 ***********
2025-06-22 11:44:15.598137 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:15.600335 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:15.602872 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:15.603395 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:15.606749 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:15.608201 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:15.609264 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:15.610869 | orchestrator |
2025-06-22 11:44:15.610915 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-06-22 11:44:15.611987 | orchestrator |
2025-06-22 11:44:15.612991 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-06-22 11:44:15.613284 | orchestrator | Sunday 22 June 2025 11:44:15 +0000 (0:00:01.268) 0:07:48.278 ***********
2025-06-22 11:44:16.604012 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:44:16.604308 | orchestrator |
2025-06-22 11:44:16.605573 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-22 11:44:16.605810 | orchestrator | Sunday 22 June 2025 11:44:16 +0000 (0:00:01.007) 0:07:49.285 ***********
2025-06-22 11:44:17.459911 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:17.460181 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:17.460211 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:17.460937 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:17.461081 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:17.461181 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:17.461813 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:17.462232 | orchestrator |
2025-06-22 11:44:17.462532 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-22 11:44:17.462812 | orchestrator | Sunday 22 June 2025 11:44:17 +0000 (0:00:00.857) 0:07:50.144 ***********
2025-06-22 11:44:18.568800 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:18.568932 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:18.569991 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:18.573049 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:18.573074 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:18.573087 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:18.573098 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:18.573781 | orchestrator |
2025-06-22 11:44:18.573810 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-06-22 11:44:18.574488 | orchestrator | Sunday 22 June 2025 11:44:18 +0000 (0:00:01.104) 0:07:51.248 ***********
2025-06-22 11:44:19.573466 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:44:19.574296 | orchestrator |
2025-06-22 11:44:19.574804 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-22 11:44:19.575956 | orchestrator | Sunday 22 June 2025 11:44:19 +0000 (0:00:01.006) 0:07:52.254 ***********
2025-06-22 11:44:19.984752 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:20.400565 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:20.401894 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:20.402561 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:20.403682 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:20.404461 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:20.405348 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:20.405995 | orchestrator |
2025-06-22 11:44:20.406773 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-22 11:44:20.408716 | orchestrator | Sunday 22 June 2025 11:44:20 +0000 (0:00:00.826) 0:07:53.081 ***********
2025-06-22 11:44:20.838907 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:21.553796 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:21.554254 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:21.555454 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:21.556584 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:21.557592 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:21.558148 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:21.559108 | orchestrator |
2025-06-22 11:44:21.560831 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:44:21.560875 | orchestrator | 2025-06-22 11:44:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:44:21.560890 | orchestrator | 2025-06-22 11:44:21 | INFO  | Please wait and do not abort execution.
2025-06-22 11:44:21.561571 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-22 11:44:21.564927 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 11:44:21.564986 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 11:44:21.564999 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 11:44:21.566651 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-22 11:44:21.568492 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 11:44:21.574370 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-22 11:44:21.575313 | orchestrator |
2025-06-22 11:44:21.575343 | orchestrator |
2025-06-22 11:44:21.575596 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:44:21.576037 | orchestrator | Sunday 22 June 2025 11:44:21 +0000 (0:00:01.154) 0:07:54.236 ***********
2025-06-22 11:44:21.576384 | orchestrator | ===============================================================================
2025-06-22 11:44:21.577147 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.53s
2025-06-22 11:44:21.577838 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.39s
2025-06-22 11:44:21.577912 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.59s
2025-06-22 11:44:21.578145 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.73s
2025-06-22 11:44:21.578675 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.61s
2025-06-22 11:44:21.579272 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.56s
2025-06-22 11:44:21.579689 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.34s
2025-06-22 11:44:21.580005 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.66s
2025-06-22 11:44:21.580305 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.61s
2025-06-22 11:44:21.580615 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.49s
2025-06-22 11:44:21.580907 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.48s
2025-06-22 11:44:21.581196 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.06s
2025-06-22 11:44:21.581555 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.90s
2025-06-22 11:44:21.581794 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.75s
2025-06-22 11:44:21.582088 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.53s
2025-06-22 11:44:21.582533 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.33s
2025-06-22 11:44:21.582562 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.71s
2025-06-22 11:44:21.582784 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.13s
2025-06-22 11:44:21.583158 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.86s
2025-06-22 11:44:21.583336 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.78s
2025-06-22 11:44:22.264427 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-22 11:44:22.264534 | orchestrator | + osism apply network
2025-06-22 11:44:24.442778 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:44:24.442883 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:44:24.442898 | orchestrator | Registering Redlock._release_script
2025-06-22 11:44:24.508079 | orchestrator | 2025-06-22 11:44:24 | INFO  | Task a5d2634c-1c2b-4130-99a5-0ecae5b02aff (network) was prepared for execution.
2025-06-22 11:44:24.508223 | orchestrator | 2025-06-22 11:44:24 | INFO  | It takes a moment until task a5d2634c-1c2b-4130-99a5-0ecae5b02aff (network) has been started and output is visible here.
2025-06-22 11:44:28.767158 | orchestrator |
2025-06-22 11:44:28.768224 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-22 11:44:28.769177 | orchestrator |
2025-06-22 11:44:28.770249 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-22 11:44:28.771370 | orchestrator | Sunday 22 June 2025 11:44:28 +0000 (0:00:00.288) 0:00:00.288 ***********
2025-06-22 11:44:28.922453 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:29.001239 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:29.087504 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:29.166123 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:29.389491 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:29.532245 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:29.533453 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:29.535961 | orchestrator |
2025-06-22 11:44:29.535991 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-22 11:44:29.536380 | orchestrator | Sunday 22 June 2025 11:44:29 +0000 (0:00:00.762) 0:00:01.050 ***********
2025-06-22 11:44:30.771737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:44:30.773510 | orchestrator |
2025-06-22 11:44:30.776290 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-22 11:44:30.776378 | orchestrator | Sunday 22 June 2025 11:44:30 +0000 (0:00:01.239) 0:00:02.290 ***********
2025-06-22 11:44:32.703859 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:32.704057 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:32.704697 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:32.705445 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:32.706151 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:32.707715 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:32.708702 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:32.709867 | orchestrator |
2025-06-22 11:44:32.711577 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-22 11:44:32.712426 | orchestrator | Sunday 22 June 2025 11:44:32 +0000 (0:00:01.932) 0:00:04.222 ***********
2025-06-22 11:44:34.599472 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:34.599745 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:34.603671 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:34.603700 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:34.603713 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:34.604240 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:34.605810 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:34.607308 | orchestrator |
2025-06-22 11:44:34.608688 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-22 11:44:34.609704 | orchestrator | Sunday 22 June 2025 11:44:34 +0000 (0:00:01.892) 0:00:06.115 ***********
2025-06-22 11:44:35.193240 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-22 11:44:35.698997 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-22 11:44:35.699315 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-22 11:44:35.700839 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-22 11:44:35.701872 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-22 11:44:35.704399 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-22 11:44:35.704424 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-22 11:44:35.704437 | orchestrator |
2025-06-22 11:44:35.705173 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-22 11:44:35.705963 | orchestrator | Sunday 22 June 2025 11:44:35 +0000 (0:00:01.105) 0:00:07.220 ***********
2025-06-22 11:44:39.081257 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-22 11:44:39.081669 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 11:44:39.082252 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-22 11:44:39.082786 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-22 11:44:39.083489 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 11:44:39.084114 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-22 11:44:39.084283 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-22 11:44:39.085322 | orchestrator |
2025-06-22 11:44:39.085555 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-22 11:44:39.087128 | orchestrator | Sunday 22 June 2025 11:44:39 +0000 (0:00:03.379) 0:00:10.600 ***********
2025-06-22 11:44:40.562791 | orchestrator | changed: [testbed-manager]
2025-06-22 11:44:40.563131 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:44:40.565208 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:44:40.566420 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:44:40.567520 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:44:40.568100 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:44:40.569201 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:44:40.570559 | orchestrator |
2025-06-22 11:44:40.571023 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-22 11:44:40.572153 | orchestrator | Sunday 22 June 2025 11:44:40 +0000 (0:00:01.483) 0:00:12.084 ***********
2025-06-22 11:44:42.461113 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 11:44:42.461286 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-22 11:44:42.462304 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-22 11:44:42.464190 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 11:44:42.464814 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-22 11:44:42.466008 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-22 11:44:42.466493 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-22 11:44:42.467232 | orchestrator |
2025-06-22 11:44:42.467851 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-22 11:44:42.468303 | orchestrator | Sunday 22 June 2025 11:44:42 +0000 (0:00:01.898) 0:00:13.982 ***********
2025-06-22 11:44:42.875811 | orchestrator | ok: [testbed-manager]
2025-06-22 11:44:43.166242 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:44:43.594075 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:44:43.594239 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:44:43.598135 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:44:43.598170 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:44:43.598182 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:44:43.598193 | orchestrator |
2025-06-22 11:44:43.598263 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-22 11:44:43.598713 | orchestrator | Sunday 22 June 2025 11:44:43 +0000 (0:00:01.127) 0:00:15.109 ***********
2025-06-22 11:44:43.765854
| orchestrator | skipping: [testbed-manager] 2025-06-22 11:44:43.854993 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:44:43.941446 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:44:44.028063 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:44:44.115914 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:44:44.271813 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:44:44.273325 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:44:44.274199 | orchestrator | 2025-06-22 11:44:44.275090 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-22 11:44:44.278180 | orchestrator | Sunday 22 June 2025 11:44:44 +0000 (0:00:00.685) 0:00:15.795 *********** 2025-06-22 11:44:46.519544 | orchestrator | ok: [testbed-manager] 2025-06-22 11:44:46.520708 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:44:46.523266 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:44:46.523305 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:44:46.524679 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:44:46.525411 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:44:46.526089 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:44:46.527562 | orchestrator | 2025-06-22 11:44:46.528405 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-22 11:44:46.529269 | orchestrator | Sunday 22 June 2025 11:44:46 +0000 (0:00:02.242) 0:00:18.037 *********** 2025-06-22 11:44:46.777384 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:44:46.859082 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:44:46.945305 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:44:47.041745 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:44:47.447095 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:44:47.449830 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:44:47.451138 | orchestrator | changed: [testbed-manager] => 
(item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-22 11:44:47.452380 | orchestrator | 2025-06-22 11:44:47.453588 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-06-22 11:44:47.454636 | orchestrator | Sunday 22 June 2025 11:44:47 +0000 (0:00:00.932) 0:00:18.969 *********** 2025-06-22 11:44:49.091553 | orchestrator | ok: [testbed-manager] 2025-06-22 11:44:49.091823 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:44:49.095154 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:44:49.095202 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:44:49.095222 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:44:49.096115 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:44:49.097224 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:44:49.098322 | orchestrator | 2025-06-22 11:44:49.100028 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-22 11:44:49.100053 | orchestrator | Sunday 22 June 2025 11:44:49 +0000 (0:00:01.636) 0:00:20.605 *********** 2025-06-22 11:44:50.400788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 11:44:50.401727 | orchestrator | 2025-06-22 11:44:50.402595 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 11:44:50.405176 | orchestrator | Sunday 22 June 2025 11:44:50 +0000 (0:00:01.314) 0:00:21.919 *********** 2025-06-22 11:44:51.594777 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:44:51.594880 | orchestrator | ok: [testbed-manager] 2025-06-22 11:44:51.595702 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:44:51.596550 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:44:51.597822 | 
orchestrator | ok: [testbed-node-3] 2025-06-22 11:44:51.599579 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:44:51.600340 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:44:51.601006 | orchestrator | 2025-06-22 11:44:51.601642 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-22 11:44:51.602367 | orchestrator | Sunday 22 June 2025 11:44:51 +0000 (0:00:01.193) 0:00:23.113 *********** 2025-06-22 11:44:51.768421 | orchestrator | ok: [testbed-manager] 2025-06-22 11:44:51.858600 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:44:51.945934 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:44:52.030972 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:44:52.120420 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:44:52.271956 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:44:52.272093 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:44:52.272226 | orchestrator | 2025-06-22 11:44:52.273159 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 11:44:52.273683 | orchestrator | Sunday 22 June 2025 11:44:52 +0000 (0:00:00.667) 0:00:23.780 *********** 2025-06-22 11:44:52.620441 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:52.620592 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.097605 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:53.098200 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.099221 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:53.100219 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.101158 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:53.101787 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.102706 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:53.106153 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.106193 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:53.106205 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.558438 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 11:44:53.558751 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 11:44:53.562557 | orchestrator | 2025-06-22 11:44:53.562639 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-22 11:44:53.562662 | orchestrator | Sunday 22 June 2025 11:44:53 +0000 (0:00:01.294) 0:00:25.075 *********** 2025-06-22 11:44:53.729682 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:44:53.819512 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:44:53.910115 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:44:53.991539 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:44:54.075707 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:44:54.201563 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:44:54.202160 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:44:54.202831 | orchestrator | 2025-06-22 11:44:54.207202 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-22 11:44:54.207240 | orchestrator | Sunday 22 June 2025 11:44:54 +0000 (0:00:00.649) 0:00:25.724 *********** 2025-06-22 11:44:58.485800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, 
testbed-node-0, testbed-manager, testbed-node-2, testbed-node-4, testbed-node-5, testbed-node-3 2025-06-22 11:44:58.486247 | orchestrator | 2025-06-22 11:44:58.488549 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-22 11:44:58.488742 | orchestrator | Sunday 22 June 2025 11:44:58 +0000 (0:00:04.280) 0:00:30.004 *********** 2025-06-22 11:45:03.579103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.582067 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.582104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.583129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.584402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.585218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 
'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:03.586071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.586565 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:03.587440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:03.588024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:03.588642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:03.589155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 
11:45:03.589571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:03.590081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:03.590460 | orchestrator | 2025-06-22 11:45:03.590980 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-22 11:45:03.591321 | orchestrator | Sunday 22 June 2025 11:45:03 +0000 (0:00:05.094) 0:00:35.099 *********** 2025-06-22 11:45:09.165333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.165428 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.165446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.165521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.166561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.167336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.168212 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.168861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.169430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 11:45:09.170339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.170880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.171432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.172011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.172676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 11:45:09.173233 | orchestrator | 2025-06-22 11:45:09.173786 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-22 11:45:09.174270 | orchestrator | Sunday 22 June 2025 11:45:09 +0000 (0:00:05.585) 0:00:40.684 *********** 2025-06-22 11:45:10.322687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 11:45:10.324515 | orchestrator | 2025-06-22 11:45:10.324550 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 11:45:10.325932 | orchestrator | Sunday 22 June 2025 11:45:10 +0000 (0:00:01.158) 0:00:41.843 *********** 2025-06-22 
11:45:10.690152 | orchestrator | ok: [testbed-manager] 2025-06-22 11:45:10.929435 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:45:11.365400 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:45:11.365481 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:45:11.366539 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:45:11.367538 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:45:11.368726 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:45:11.369879 | orchestrator | 2025-06-22 11:45:11.370262 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 11:45:11.371091 | orchestrator | Sunday 22 June 2025 11:45:11 +0000 (0:00:01.041) 0:00:42.884 *********** 2025-06-22 11:45:11.441677 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:11.442697 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:11.443431 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:11.527274 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:11.527414 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:11.528520 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:11.529142 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:11.529797 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:11.624375 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:45:11.624715 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:11.626080 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:11.629072 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:11.629112 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:11.708138 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:45:11.709739 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:11.710001 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:11.712096 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:11.814827 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:11.814907 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:45:11.816258 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:11.819260 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:11.819287 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:11.819297 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:12.024039 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:45:12.024577 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:12.025719 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:12.029041 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:12.029079 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:13.296796 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 11:45:13.297325 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:45:13.301190 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 11:45:13.301924 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 11:45:13.302465 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 11:45:13.303145 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 11:45:13.304389 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:45:13.305751 | orchestrator | 2025-06-22 11:45:13.306861 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-22 11:45:13.307473 | orchestrator | Sunday 22 June 2025 11:45:13 +0000 (0:00:01.930) 0:00:44.815 *********** 2025-06-22 11:45:13.497577 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:45:13.578619 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:45:13.661709 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:45:13.748265 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:45:13.832545 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:45:13.942776 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:45:13.944071 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:45:13.944466 | orchestrator | 2025-06-22 11:45:13.945839 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-22 11:45:13.948224 | orchestrator | Sunday 22 June 2025 11:45:13 +0000 (0:00:00.650) 0:00:45.466 *********** 2025-06-22 11:45:14.122100 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:45:14.205071 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:45:14.464183 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:45:14.550854 | orchestrator | skipping: [testbed-node-2] 
2025-06-22 11:45:14.637005 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:45:14.684796 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:45:14.684988 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:45:14.686700 | orchestrator | 2025-06-22 11:45:14.687911 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:45:14.688426 | orchestrator | 2025-06-22 11:45:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 11:45:14.688799 | orchestrator | 2025-06-22 11:45:14 | INFO  | Please wait and do not abort execution. 2025-06-22 11:45:14.690113 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 11:45:14.690581 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 11:45:14.691469 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 11:45:14.692212 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 11:45:14.692974 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 11:45:14.693824 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 11:45:14.694358 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 11:45:14.694986 | orchestrator | 2025-06-22 11:45:14.695703 | orchestrator | 2025-06-22 11:45:14.696505 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:45:14.697295 | orchestrator | Sunday 22 June 2025 11:45:14 +0000 (0:00:00.740) 0:00:46.207 *********** 2025-06-22 11:45:14.698146 | orchestrator | 
=============================================================================== 2025-06-22 11:45:14.698586 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.59s 2025-06-22 11:45:14.699371 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.09s 2025-06-22 11:45:14.700085 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.28s 2025-06-22 11:45:14.700767 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.38s 2025-06-22 11:45:14.701378 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s 2025-06-22 11:45:14.702119 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s 2025-06-22 11:45:14.702611 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.93s 2025-06-22 11:45:14.703701 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.90s 2025-06-22 11:45:14.704603 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.89s 2025-06-22 11:45:14.705326 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2025-06-22 11:45:14.706142 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2025-06-22 11:45:14.706904 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-06-22 11:45:14.707751 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s 2025-06-22 11:45:14.708604 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2025-06-22 11:45:14.709382 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2025-06-22 11:45:14.710113 | orchestrator | 
osism.commons.network : Include networkd cleanup tasks ------------------ 1.16s 2025-06-22 11:45:14.711767 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-06-22 11:45:14.712728 | orchestrator | osism.commons.network : Create required directories --------------------- 1.11s 2025-06-22 11:45:14.713182 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.04s 2025-06-22 11:45:14.713808 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2025-06-22 11:45:15.429112 | orchestrator | + osism apply wireguard 2025-06-22 11:45:17.232397 | orchestrator | Registering Redlock._acquired_script 2025-06-22 11:45:17.232502 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:45:17.232517 | orchestrator | Registering Redlock._release_script 2025-06-22 11:45:17.305203 | orchestrator | 2025-06-22 11:45:17 | INFO  | Task 638f0ab1-b6ea-4fd4-a43b-80282b72b7c6 (wireguard) was prepared for execution. 2025-06-22 11:45:17.305293 | orchestrator | 2025-06-22 11:45:17 | INFO  | It takes a moment until task 638f0ab1-b6ea-4fd4-a43b-80282b72b7c6 (wireguard) has been started and output is visible here. 
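The network play above generates one `.netdev`/`.network` pair per vxlan interface (the cleanup task lists them as `/etc/systemd/network/30-vxlan0.netdev` and friends). As a rough illustration only — the actual templates shipped by `osism.commons.network` are not visible in this log — a generated pair for `vxlan0` on testbed-manager, using the values from the logged task items (`vni: 42`, `mtu: 1350`, `local_ip: 192.168.16.5`, address `192.168.112.5/20`), might look like:

```ini
# /etc/systemd/network/30-vxlan0.netdev -- sketch, not the role's real template
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network -- sketch, not the role's real template
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

The per-peer `dests` list in the task items would additionally need static forwarding entries (e.g. `[BridgeFDB]` sections with `Destination=` per remote VTEP) or an equivalent mechanism; how the role actually wires those up is not shown in this log.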
2025-06-22 11:45:21.958969 | orchestrator |
2025-06-22 11:45:21.960062 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-22 11:45:21.961372 | orchestrator |
2025-06-22 11:45:21.962294 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-22 11:45:21.964446 | orchestrator | Sunday 22 June 2025 11:45:21 +0000 (0:00:00.246) 0:00:00.246 ***********
2025-06-22 11:45:23.524081 | orchestrator | ok: [testbed-manager]
2025-06-22 11:45:23.525624 | orchestrator |
2025-06-22 11:45:23.526408 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-22 11:45:23.527325 | orchestrator | Sunday 22 June 2025 11:45:23 +0000 (0:00:01.567) 0:00:01.814 ***********
2025-06-22 11:45:30.400605 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:30.401358 | orchestrator |
2025-06-22 11:45:30.401705 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-22 11:45:30.403151 | orchestrator | Sunday 22 June 2025 11:45:30 +0000 (0:00:06.874) 0:00:08.688 ***********
2025-06-22 11:45:30.951554 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:30.951787 | orchestrator |
2025-06-22 11:45:30.954811 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-22 11:45:30.955066 | orchestrator | Sunday 22 June 2025 11:45:30 +0000 (0:00:00.551) 0:00:09.240 ***********
2025-06-22 11:45:31.384301 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:31.384467 | orchestrator |
2025-06-22 11:45:31.384940 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-22 11:45:31.385394 | orchestrator | Sunday 22 June 2025 11:45:31 +0000 (0:00:00.434) 0:00:09.675 ***********
2025-06-22 11:45:31.918771 | orchestrator | ok: [testbed-manager]
2025-06-22 11:45:31.919427 | orchestrator |
2025-06-22 11:45:31.920552 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-22 11:45:31.924602 | orchestrator | Sunday 22 June 2025 11:45:31 +0000 (0:00:00.533) 0:00:10.209 ***********
2025-06-22 11:45:32.443887 | orchestrator | ok: [testbed-manager]
2025-06-22 11:45:32.443959 | orchestrator |
2025-06-22 11:45:32.444475 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-22 11:45:32.445076 | orchestrator | Sunday 22 June 2025 11:45:32 +0000 (0:00:00.526) 0:00:10.735 ***********
2025-06-22 11:45:32.868783 | orchestrator | ok: [testbed-manager]
2025-06-22 11:45:32.868942 | orchestrator |
2025-06-22 11:45:32.870091 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-22 11:45:32.870747 | orchestrator | Sunday 22 June 2025 11:45:32 +0000 (0:00:00.422) 0:00:11.158 ***********
2025-06-22 11:45:34.090489 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:34.090750 | orchestrator |
2025-06-22 11:45:34.091877 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-22 11:45:34.092746 | orchestrator | Sunday 22 June 2025 11:45:34 +0000 (0:00:01.221) 0:00:12.380 ***********
2025-06-22 11:45:35.043253 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-22 11:45:35.043333 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:35.043342 | orchestrator |
2025-06-22 11:45:35.043349 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-22 11:45:35.043355 | orchestrator | Sunday 22 June 2025 11:45:35 +0000 (0:00:00.945) 0:00:13.325 ***********
2025-06-22 11:45:36.864100 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:36.864729 | orchestrator |
2025-06-22 11:45:36.866257 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-22 11:45:36.866533 | orchestrator | Sunday 22 June 2025 11:45:36 +0000 (0:00:01.828) 0:00:15.154 ***********
2025-06-22 11:45:37.822864 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:37.823430 | orchestrator |
2025-06-22 11:45:37.823807 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:45:37.824476 | orchestrator | 2025-06-22 11:45:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:45:37.824503 | orchestrator | 2025-06-22 11:45:37 | INFO  | Please wait and do not abort execution.
2025-06-22 11:45:37.824814 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:45:37.826123 | orchestrator |
2025-06-22 11:45:37.826939 | orchestrator |
2025-06-22 11:45:37.827757 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:45:37.828490 | orchestrator | Sunday 22 June 2025 11:45:37 +0000 (0:00:00.960) 0:00:16.114 ***********
2025-06-22 11:45:37.829124 | orchestrator | ===============================================================================
2025-06-22 11:45:37.829884 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.87s
2025-06-22 11:45:37.830433 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.83s
2025-06-22 11:45:37.830886 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.57s
2025-06-22 11:45:37.831346 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s
2025-06-22 11:45:37.832088 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s
2025-06-22 11:45:37.832490 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s
2025-06-22 11:45:37.832946 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2025-06-22 11:45:37.833128 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2025-06-22 11:45:37.833543 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s
2025-06-22 11:45:37.833837 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2025-06-22 11:45:37.834108 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-06-22 11:45:38.464549 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-22 11:45:38.502520 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2025-06-22 11:45:38.502635 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2025-06-22 11:45:38.592369 | orchestrator |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0 100    15  100    15    0     0    167      0 --:--:-- --:--:-- --:--:--   166 100    15  100    15    0     0    167      0 --:--:-- --:--:-- --:--:--   166
2025-06-22 11:45:38.609715 | orchestrator | + osism apply --environment custom workarounds
2025-06-22 11:45:40.295571 | orchestrator | 2025-06-22 11:45:40 | INFO  | Trying to run play workarounds in environment custom
2025-06-22 11:45:40.300918 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:45:40.301015 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:45:40.301047 | orchestrator | Registering Redlock._release_script
2025-06-22 11:45:40.361368 | orchestrator | 2025-06-22 11:45:40 | INFO  | Task 7b9fea2c-74f4-4a7b-b9ba-9865da16b8de (workarounds) was prepared for execution.
2025-06-22 11:45:40.361485 | orchestrator | 2025-06-22 11:45:40 | INFO  | It takes a moment until task 7b9fea2c-74f4-4a7b-b9ba-9865da16b8de (workarounds) has been started and output is visible here.
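[Editor's note] The wireguard play above creates a server key pair, a preshared key, and then templates `/etc/wireguard/wg0.conf` before enabling `wg-quick@wg0.service`. The rendered file is not shown in this log; as a rough orientation, a wg-quick configuration of this kind has the following shape (all keys, addresses, and ports below are placeholders, not values from this deployment):

```ini
; Hypothetical sketch of a wg0.conf as consumed by wg-quick; NOT the file
; rendered by the osism.services.wireguard role in this run.
[Interface]
; private key produced by the "Create public and private key - server" task
PrivateKey = <server-private-key>
Address    = 192.0.2.1/24
ListenPort = 51820

[Peer]
; one section per client configuration file copied by the role
PublicKey    = <client-public-key>
; optional extra symmetric secret, from the "Create preshared key" task
PresharedKey = <preshared-key>
AllowedIPs   = 192.0.2.2/32
```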
2025-06-22 11:45:44.112755 | orchestrator |
2025-06-22 11:45:44.113498 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 11:45:44.115320 | orchestrator |
2025-06-22 11:45:44.117490 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-22 11:45:44.119044 | orchestrator | Sunday 22 June 2025 11:45:44 +0000 (0:00:00.109) 0:00:00.109 ***********
2025-06-22 11:45:44.241764 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-22 11:45:44.306310 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-22 11:45:44.369625 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-22 11:45:44.433179 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-22 11:45:44.566190 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-22 11:45:44.721628 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-22 11:45:44.722338 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-22 11:45:44.723361 | orchestrator |
2025-06-22 11:45:44.723936 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-22 11:45:44.724561 | orchestrator |
2025-06-22 11:45:44.725267 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-22 11:45:44.725862 | orchestrator | Sunday 22 June 2025 11:45:44 +0000 (0:00:00.612) 0:00:00.721 ***********
2025-06-22 11:45:46.883385 | orchestrator | ok: [testbed-manager]
2025-06-22 11:45:46.883481 | orchestrator |
2025-06-22 11:45:46.883498 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-22 11:45:46.884229 | orchestrator |
2025-06-22 11:45:46.884822 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-22 11:45:46.885804 | orchestrator | Sunday 22 June 2025 11:45:46 +0000 (0:00:02.156) 0:00:02.877 ***********
2025-06-22 11:45:48.714450 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:45:48.715356 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:45:48.716608 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:45:48.717862 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:45:48.718558 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:45:48.719353 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:45:48.719951 | orchestrator |
2025-06-22 11:45:48.720706 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-22 11:45:48.721279 | orchestrator |
2025-06-22 11:45:48.721967 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-22 11:45:48.722415 | orchestrator | Sunday 22 June 2025 11:45:48 +0000 (0:00:01.832) 0:00:04.710 ***********
2025-06-22 11:45:50.139261 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-22 11:45:50.139665 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-22 11:45:50.140902 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-22 11:45:50.144050 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-22 11:45:50.144599 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-22 11:45:50.145576 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-22 11:45:50.146294 | orchestrator |
2025-06-22 11:45:50.147212 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-22 11:45:50.147879 | orchestrator | Sunday 22 June 2025 11:45:50 +0000 (0:00:01.423) 0:00:06.134 ***********
2025-06-22 11:45:53.932069 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:45:53.934425 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:45:53.934471 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:45:53.935353 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:45:53.936796 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:45:53.937529 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:45:53.938808 | orchestrator |
2025-06-22 11:45:53.939814 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-22 11:45:53.940124 | orchestrator | Sunday 22 June 2025 11:45:53 +0000 (0:00:03.794) 0:00:09.928 ***********
2025-06-22 11:45:54.101989 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:45:54.180979 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:45:54.261356 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:45:54.338292 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:45:54.660299 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:45:54.662169 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:45:54.664424 | orchestrator |
2025-06-22 11:45:54.664834 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-22 11:45:54.666454 | orchestrator |
2025-06-22 11:45:54.668136 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-22 11:45:54.669549 | orchestrator | Sunday 22 June 2025 11:45:54 +0000 (0:00:00.728) 0:00:10.656 ***********
2025-06-22 11:45:56.366994 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:56.369735 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:45:56.369801 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:45:56.369817 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:45:56.370964 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:45:56.372996 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:45:56.374062 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:45:56.374670 | orchestrator |
2025-06-22 11:45:56.376274 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-22 11:45:56.376298 | orchestrator | Sunday 22 June 2025 11:45:56 +0000 (0:00:01.705) 0:00:12.362 ***********
2025-06-22 11:45:58.077278 | orchestrator | changed: [testbed-manager]
2025-06-22 11:45:58.080900 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:45:58.080989 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:45:58.081047 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:45:58.082056 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:45:58.083805 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:45:58.085401 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:45:58.085990 | orchestrator |
2025-06-22 11:45:58.088090 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-22 11:45:58.088139 | orchestrator | Sunday 22 June 2025 11:45:58 +0000 (0:00:01.704) 0:00:14.066 ***********
2025-06-22 11:45:59.596924 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:45:59.599425 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:45:59.605488 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:45:59.605529 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:45:59.605635 | orchestrator | ok: [testbed-manager]
2025-06-22 11:45:59.607099 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:45:59.607795 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:45:59.608608 | orchestrator |
2025-06-22 11:45:59.609736 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-22 11:45:59.610203 | orchestrator | Sunday 22 June 2025 11:45:59 +0000 (0:00:01.527) 0:00:15.593 ***********
2025-06-22 11:46:01.362170 | orchestrator | changed: [testbed-manager]
2025-06-22 11:46:01.362812 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:46:01.363007 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:46:01.363777 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:46:01.364552 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:46:01.365018 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:46:01.366692 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:46:01.366914 | orchestrator |
2025-06-22 11:46:01.369127 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-06-22 11:46:01.369152 | orchestrator | Sunday 22 June 2025 11:46:01 +0000 (0:00:01.762) 0:00:17.356 ***********
2025-06-22 11:46:01.535961 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:46:01.616023 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:46:01.697276 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:46:01.773164 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:46:01.847206 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:46:01.988276 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:46:01.988370 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:46:01.988384 | orchestrator |
2025-06-22 11:46:01.989988 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-06-22 11:46:01.990065 | orchestrator |
2025-06-22 11:46:01.990081 | orchestrator | TASK [Install python3-docker] **************************************************
2025-06-22 11:46:01.990093 | orchestrator | Sunday 22 June 2025 11:46:01 +0000 (0:00:00.627) 0:00:17.984 ***********
2025-06-22 11:46:04.587285 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:46:04.587392 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:46:04.587951 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:46:04.589041 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:46:04.590452 | orchestrator | ok: [testbed-manager]
2025-06-22 11:46:04.591635 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:46:04.591756 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:46:04.593062 | orchestrator |
2025-06-22 11:46:04.594101 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:46:04.595178 | orchestrator | 2025-06-22 11:46:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:46:04.595414 | orchestrator | 2025-06-22 11:46:04 | INFO  | Please wait and do not abort execution.
2025-06-22 11:46:04.598184 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:46:04.598988 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:04.599569 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:04.600145 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:04.600549 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:04.601408 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:04.601771 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:04.601979 | orchestrator |
2025-06-22 11:46:04.603197 | orchestrator |
2025-06-22 11:46:04.603497 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:46:04.604381 | orchestrator | Sunday 22 June 2025 11:46:04 +0000 (0:00:02.599) 0:00:20.583 ***********
2025-06-22 11:46:04.605042 | orchestrator | ===============================================================================
2025-06-22 11:46:04.605418 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.79s
2025-06-22 11:46:04.605833 | orchestrator | Install python3-docker -------------------------------------------------- 2.60s
2025-06-22 11:46:04.606249 | orchestrator | Apply netplan configuration --------------------------------------------- 2.16s
2025-06-22 11:46:04.606891 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2025-06-22 11:46:04.608199 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s
2025-06-22 11:46:04.609035 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.71s
2025-06-22 11:46:04.609610 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s
2025-06-22 11:46:04.609898 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2025-06-22 11:46:04.610295 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.42s
2025-06-22 11:46:04.610644 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s
2025-06-22 11:46:04.611418 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-06-22 11:46:04.611480 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.61s
2025-06-22 11:46:05.268448 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-22 11:46:06.952670 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:46:06.952778 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:46:06.952929 | orchestrator | Registering Redlock._release_script
2025-06-22 11:46:07.011616 | orchestrator | 2025-06-22 11:46:07 | INFO  | Task 582eaa46-8e20-40a9-a1f6-24cbd4549221 (reboot) was prepared for execution.
2025-06-22 11:46:07.011735 | orchestrator | 2025-06-22 11:46:07 | INFO  | It takes a moment until task 582eaa46-8e20-40a9-a1f6-24cbd4549221 (reboot) has been started and output is visible here.
2025-06-22 11:46:10.923017 | orchestrator |
2025-06-22 11:46:10.923399 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-22 11:46:10.925511 | orchestrator |
2025-06-22 11:46:10.925873 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-22 11:46:10.926517 | orchestrator | Sunday 22 June 2025 11:46:10 +0000 (0:00:00.156) 0:00:00.156 ***********
2025-06-22 11:46:11.022457 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:46:11.022603 | orchestrator |
2025-06-22 11:46:11.024188 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-22 11:46:11.024235 | orchestrator | Sunday 22 June 2025 11:46:11 +0000 (0:00:00.102) 0:00:00.259 ***********
2025-06-22 11:46:11.969041 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:46:11.969219 | orchestrator |
2025-06-22 11:46:11.969792 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-22 11:46:11.970335 | orchestrator | Sunday 22 June 2025 11:46:11 +0000 (0:00:00.940) 0:00:01.199 ***********
2025-06-22 11:46:12.070940 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:46:12.071017 | orchestrator |
2025-06-22 11:46:12.071031 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-22 11:46:12.071043 | orchestrator |
2025-06-22 11:46:12.071104 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-22 11:46:12.071289 | orchestrator | Sunday 22 June 2025 11:46:12 +0000 (0:00:00.099) 0:00:01.299 ***********
2025-06-22 11:46:12.152373 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:46:12.152552 | orchestrator |
2025-06-22 11:46:12.152678 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-22 11:46:12.153004 | orchestrator | Sunday 22 June 2025 11:46:12 +0000 (0:00:00.090) 0:00:01.389 ***********
2025-06-22 11:46:12.808981 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:46:12.809081 | orchestrator |
2025-06-22 11:46:12.809581 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-22 11:46:12.810378 | orchestrator | Sunday 22 June 2025 11:46:12 +0000 (0:00:00.654) 0:00:02.043 ***********
2025-06-22 11:46:12.928024 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:46:12.928203 | orchestrator |
2025-06-22 11:46:12.929486 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-22 11:46:12.929938 | orchestrator |
2025-06-22 11:46:12.931152 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-22 11:46:12.932024 | orchestrator | Sunday 22 June 2025 11:46:12 +0000 (0:00:00.119) 0:00:02.162 ***********
2025-06-22 11:46:13.131888 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:46:13.132488 | orchestrator |
2025-06-22 11:46:13.133308 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-22 11:46:13.134388 | orchestrator | Sunday 22 June 2025 11:46:13 +0000 (0:00:00.204) 0:00:02.366 ***********
2025-06-22 11:46:13.765872 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:46:13.766332 | orchestrator |
2025-06-22 11:46:13.767330 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-22 11:46:13.768470 | orchestrator | Sunday 22 June 2025 11:46:13 +0000 (0:00:00.634) 0:00:03.001 ***********
2025-06-22 11:46:13.866598 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:46:13.867132 | orchestrator |
2025-06-22 11:46:13.868543 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-22 11:46:13.869366 | orchestrator |
2025-06-22 11:46:13.870413 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-22 11:46:13.871309 | orchestrator | Sunday 22 June 2025 11:46:13 +0000 (0:00:00.099) 0:00:03.101 ***********
2025-06-22 11:46:13.951814 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:46:13.952448 | orchestrator |
2025-06-22 11:46:13.953399 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-22 11:46:13.953957 | orchestrator | Sunday 22 June 2025 11:46:13 +0000 (0:00:00.087) 0:00:03.188 ***********
2025-06-22 11:46:14.588778 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:46:14.591177 | orchestrator |
2025-06-22 11:46:14.591789 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-22 11:46:14.592301 | orchestrator | Sunday 22 June 2025 11:46:14 +0000 (0:00:00.636) 0:00:03.825 ***********
2025-06-22 11:46:14.705741 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:46:14.706755 | orchestrator |
2025-06-22 11:46:14.706968 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-22 11:46:14.707794 | orchestrator |
2025-06-22 11:46:14.708628 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-22 11:46:14.709427 | orchestrator | Sunday 22 June 2025 11:46:14 +0000 (0:00:00.114) 0:00:03.939 ***********
2025-06-22 11:46:14.811798 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:46:14.812317 | orchestrator |
2025-06-22 11:46:14.812631 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-22 11:46:14.813258 | orchestrator | Sunday 22 June 2025 11:46:14 +0000 (0:00:00.109) 0:00:04.048 ***********
2025-06-22 11:46:15.511495 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:46:15.512054 | orchestrator |
2025-06-22 11:46:15.512984 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-22 11:46:15.514015 | orchestrator | Sunday 22 June 2025 11:46:15 +0000 (0:00:00.695) 0:00:04.743 ***********
2025-06-22 11:46:15.621778 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:46:15.622320 | orchestrator |
2025-06-22 11:46:15.623196 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-22 11:46:15.623889 | orchestrator |
2025-06-22 11:46:15.624747 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-22 11:46:15.625572 | orchestrator | Sunday 22 June 2025 11:46:15 +0000 (0:00:00.111) 0:00:04.855 ***********
2025-06-22 11:46:15.719275 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:46:15.720007 | orchestrator |
2025-06-22 11:46:15.720778 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-22 11:46:15.721817 | orchestrator | Sunday 22 June 2025 11:46:15 +0000 (0:00:00.099) 0:00:04.955 ***********
2025-06-22 11:46:16.385435 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:46:16.385624 | orchestrator |
2025-06-22 11:46:16.386416 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-22 11:46:16.387482 | orchestrator | Sunday 22 June 2025 11:46:16 +0000 (0:00:00.663) 0:00:05.619 ***********
2025-06-22 11:46:16.423183 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:46:16.425209 | orchestrator |
2025-06-22 11:46:16.426873 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:46:16.426953 | orchestrator | 2025-06-22 11:46:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:46:16.426970 | orchestrator | 2025-06-22 11:46:16 | INFO  | Please wait and do not abort execution.
2025-06-22 11:46:16.428119 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:16.429158 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:16.432653 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:16.437034 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:16.437417 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:16.438096 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:46:16.438608 | orchestrator |
2025-06-22 11:46:16.440663 | orchestrator |
2025-06-22 11:46:16.441426 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:46:16.441864 | orchestrator | Sunday 22 June 2025 11:46:16 +0000 (0:00:00.039) 0:00:05.659 ***********
2025-06-22 11:46:16.442466 | orchestrator | ===============================================================================
2025-06-22 11:46:16.443694 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.22s
2025-06-22 11:46:16.445919 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.69s
2025-06-22 11:46:16.445980 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s
2025-06-22 11:46:17.183740 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-22 11:46:18.899353 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:46:18.899483 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:46:18.899511 | orchestrator | Registering Redlock._release_script
2025-06-22 11:46:18.958982 | orchestrator | 2025-06-22 11:46:18 | INFO  | Task 5d417f53-d2c9-4136-9f03-99b19904a724 (wait-for-connection) was prepared for execution.
2025-06-22 11:46:18.959070 | orchestrator | 2025-06-22 11:46:18 | INFO  | It takes a moment until task 5d417f53-d2c9-4136-9f03-99b19904a724 (wait-for-connection) has been started and output is visible here.
2025-06-22 11:46:23.229496 | orchestrator |
2025-06-22 11:46:23.230414 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-22 11:46:23.233742 | orchestrator |
2025-06-22 11:46:23.234620 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-22 11:46:23.235543 | orchestrator | Sunday 22 June 2025 11:46:23 +0000 (0:00:00.241) 0:00:00.241 ***********
2025-06-22 11:46:34.964252 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:46:34.964466 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:46:34.965455 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:46:34.967014 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:46:34.967833 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:46:34.969165 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:46:34.970200 | orchestrator |
2025-06-22 11:46:34.971270 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:46:34.971842 | orchestrator | 2025-06-22 11:46:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:46:34.972242 | orchestrator | 2025-06-22 11:46:34 | INFO  | Please wait and do not abort execution.
2025-06-22 11:46:34.975022 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:46:34.976196 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:46:34.977181 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:46:34.977621 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:46:34.978364 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:46:34.978994 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:46:34.979687 | orchestrator | 2025-06-22 11:46:34.980393 | orchestrator | 2025-06-22 11:46:34.980988 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:46:34.981673 | orchestrator | Sunday 22 June 2025 11:46:34 +0000 (0:00:11.731) 0:00:11.973 *********** 2025-06-22 11:46:34.982153 | orchestrator | =============================================================================== 2025-06-22 11:46:34.982663 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.73s 2025-06-22 11:46:35.591742 | orchestrator | + osism apply hddtemp 2025-06-22 11:46:37.281265 | orchestrator | Registering Redlock._acquired_script 2025-06-22 11:46:37.281366 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:46:37.281381 | orchestrator | Registering Redlock._release_script 2025-06-22 11:46:37.340072 | orchestrator | 2025-06-22 11:46:37 | INFO  | Task 2ca9822e-7b6c-4632-9571-13cdbd012a0c (hddtemp) was prepared for execution. 
2025-06-22 11:46:37.340228 | orchestrator | 2025-06-22 11:46:37 | INFO  | It takes a moment until task 2ca9822e-7b6c-4632-9571-13cdbd012a0c (hddtemp) has been started and output is visible here.
2025-06-22 11:46:41.378470 | orchestrator |
2025-06-22 11:46:41.378591 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-22 11:46:41.382412 | orchestrator |
2025-06-22 11:46:41.382441 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-22 11:46:41.382454 | orchestrator | Sunday 22 June 2025 11:46:41 +0000 (0:00:00.273) 0:00:00.273 ***********
2025-06-22 11:46:41.550629 | orchestrator | ok: [testbed-manager]
2025-06-22 11:46:41.631541 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:46:41.708320 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:46:41.782593 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:46:41.979221 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:46:42.111195 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:46:42.112259 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:46:42.116575 | orchestrator |
2025-06-22 11:46:42.116916 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-22 11:46:42.118405 | orchestrator | Sunday 22 June 2025 11:46:42 +0000 (0:00:00.731) 0:00:01.005 ***********
2025-06-22 11:46:43.324987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:46:43.326708 | orchestrator |
2025-06-22 11:46:43.326797 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-22 11:46:43.327908 | orchestrator | Sunday 22 June 2025 11:46:43 +0000 (0:00:01.213) 0:00:02.218 ***********
2025-06-22 11:46:45.383051 | orchestrator | ok: [testbed-manager]
2025-06-22 11:46:45.383606 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:46:45.387293 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:46:45.387323 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:46:45.387373 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:46:45.388740 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:46:45.390630 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:46:45.391419 | orchestrator |
2025-06-22 11:46:45.392279 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-22 11:46:45.393121 | orchestrator | Sunday 22 June 2025 11:46:45 +0000 (0:00:02.062) 0:00:04.280 ***********
2025-06-22 11:46:46.037866 | orchestrator | changed: [testbed-manager]
2025-06-22 11:46:46.127303 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:46:46.570460 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:46:46.570563 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:46:46.571470 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:46:46.574350 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:46:46.574375 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:46:46.574388 | orchestrator |
2025-06-22 11:46:46.574401 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-22 11:46:46.577244 | orchestrator | Sunday 22 June 2025 11:46:46 +0000 (0:00:01.183) 0:00:05.463 ***********
2025-06-22 11:46:48.385474 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:46:48.385808 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:46:48.387596 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:46:48.388724 | orchestrator | ok: [testbed-manager]
2025-06-22 11:46:48.390099 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:46:48.391117 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:46:48.391547 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:46:48.393031 | orchestrator |
2025-06-22 11:46:48.394351 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-22 11:46:48.394382 | orchestrator | Sunday 22 June 2025 11:46:48 +0000 (0:00:01.818) 0:00:07.281 ***********
2025-06-22 11:46:48.880906 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:46:48.966475 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:46:49.047417 | orchestrator | changed: [testbed-manager]
2025-06-22 11:46:49.136012 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:46:49.258257 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:46:49.258697 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:46:49.259895 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:46:49.260406 | orchestrator |
2025-06-22 11:46:49.261377 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-22 11:46:49.262146 | orchestrator | Sunday 22 June 2025 11:46:49 +0000 (0:00:00.871) 0:00:08.153 ***********
2025-06-22 11:47:01.750753 | orchestrator | changed: [testbed-manager]
2025-06-22 11:47:01.750871 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:47:01.751479 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:47:01.752548 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:47:01.755332 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:47:01.757707 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:47:01.758654 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:47:01.761811 | orchestrator |
2025-06-22 11:47:01.762584 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-22 11:47:01.763477 | orchestrator | Sunday 22 June 2025 11:47:01 +0000 (0:00:12.490) 0:00:20.644 ***********
2025-06-22 11:47:03.203782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:47:03.204689 | orchestrator |
2025-06-22 11:47:03.208509 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-22 11:47:03.208541 | orchestrator | Sunday 22 June 2025 11:47:03 +0000 (0:00:01.454) 0:00:22.098 ***********
2025-06-22 11:47:05.121655 | orchestrator | changed: [testbed-manager]
2025-06-22 11:47:05.121744 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:47:05.121756 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:47:05.121766 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:47:05.123507 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:47:05.124154 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:47:05.125096 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:47:05.126414 | orchestrator |
2025-06-22 11:47:05.127375 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:47:05.128106 | orchestrator | 2025-06-22 11:47:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:47:05.128548 | orchestrator | 2025-06-22 11:47:05 | INFO  | Please wait and do not abort execution.
2025-06-22 11:47:05.130394 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:47:05.131118 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:47:05.132356 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:47:05.133376 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:47:05.134340 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:47:05.135436 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:47:05.136472 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:47:05.137587 | orchestrator |
2025-06-22 11:47:05.138122 | orchestrator |
2025-06-22 11:47:05.139067 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:47:05.139839 | orchestrator | Sunday 22 June 2025 11:47:05 +0000 (0:00:01.918) 0:00:24.017 ***********
2025-06-22 11:47:05.140846 | orchestrator | ===============================================================================
2025-06-22 11:47:05.141322 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.49s
2025-06-22 11:47:05.142170 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s
2025-06-22 11:47:05.142685 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s
2025-06-22 11:47:05.143381 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.82s
2025-06-22 11:47:05.143963 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.45s
2025-06-22 11:47:05.145187 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s
2025-06-22 11:47:05.145770 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s
2025-06-22 11:47:05.146493 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.87s
2025-06-22 11:47:05.147329 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s
2025-06-22 11:47:05.846285 | orchestrator | ++ semver 9.1.0 7.1.1
2025-06-22 11:47:05.909398 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-22 11:47:05.909503 | orchestrator | + sudo systemctl restart manager.service
2025-06-22 11:47:19.092533 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-22 11:47:19.092650 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-22 11:47:19.092665 | orchestrator | + local max_attempts=60
2025-06-22 11:47:19.092679 | orchestrator | + local name=ceph-ansible
2025-06-22 11:47:19.092690 | orchestrator | + local attempt_num=1
2025-06-22 11:47:19.092702 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:19.137658 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:19.137742 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:19.137756 | orchestrator | + sleep 5
2025-06-22 11:47:24.146868 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:24.174641 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:24.174710 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:24.174724 | orchestrator | + sleep 5
2025-06-22 11:47:29.178448 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:29.214336 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:29.214383 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:29.214425 | orchestrator | + sleep 5
2025-06-22 11:47:34.219181 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:34.262616 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:34.262700 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:34.262714 | orchestrator | + sleep 5
2025-06-22 11:47:39.268398 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:39.311916 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:39.312010 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:39.312024 | orchestrator | + sleep 5
2025-06-22 11:47:44.317321 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:44.351300 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:44.351384 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:44.351395 | orchestrator | + sleep 5
2025-06-22 11:47:49.356798 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:49.399106 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:49.399192 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:49.399206 | orchestrator | + sleep 5
2025-06-22 11:47:54.405225 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:54.441415 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:54.441470 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:54.441475 | orchestrator | + sleep 5
2025-06-22 11:47:59.444719 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:47:59.488616 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-22 11:47:59.488707 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:47:59.488722 | orchestrator | + sleep 5
2025-06-22 11:48:04.491919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:48:04.529735 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:04.529805 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:48:04.529818 | orchestrator | + sleep 5
2025-06-22 11:48:09.533994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:48:09.574394 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:09.574529 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:48:09.574545 | orchestrator | + sleep 5
2025-06-22 11:48:14.579819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:48:14.623053 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:14.623133 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:48:14.623147 | orchestrator | + sleep 5
2025-06-22 11:48:19.628776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:48:19.671776 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:19.671863 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-22 11:48:19.671876 | orchestrator | + sleep 5
2025-06-22 11:48:24.677498 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-22 11:48:24.726369 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:24.726529 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-22 11:48:24.726546 | orchestrator | + local max_attempts=60
2025-06-22 11:48:24.726560 | orchestrator | + local name=kolla-ansible
2025-06-22 11:48:24.726571 | orchestrator | + local attempt_num=1
2025-06-22 11:48:24.727371 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-22 11:48:24.765156 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:24.765248 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-22 11:48:24.765263 | orchestrator | + local max_attempts=60
2025-06-22 11:48:24.765276 | orchestrator | + local name=osism-ansible
2025-06-22 11:48:24.765287 | orchestrator | + local attempt_num=1
2025-06-22 11:48:24.766195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-22 11:48:24.811517 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-22 11:48:24.811605 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-22 11:48:24.811618 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-22 11:48:25.002885 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-22 11:48:25.170954 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-22 11:48:25.336847 | orchestrator | ARA in osism-ansible already disabled.
2025-06-22 11:48:25.496675 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-22 11:48:25.497089 | orchestrator | + osism apply gather-facts
2025-06-22 11:48:27.370244 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:48:27.370347 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:48:27.370363 | orchestrator | Registering Redlock._release_script
2025-06-22 11:48:27.431573 | orchestrator | 2025-06-22 11:48:27 | INFO  | Task c8554247-72d6-4b57-b589-535d82cd95dd (gather-facts) was prepared for execution.
2025-06-22 11:48:27.431668 | orchestrator | 2025-06-22 11:48:27 | INFO  | It takes a moment until task c8554247-72d6-4b57-b589-535d82cd95dd (gather-facts) has been started and output is visible here.
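The `wait_for_container_healthy` calls traced above (the loop cycles through `unhealthy` and `starting` for about a minute after `manager.service` is restarted before `ceph-ansible` reports `healthy`) amount to a small polling loop. A minimal sketch reconstructed from the xtrace output; the real helper lives in the testbed scripts, and the `health_status` wrapper is a hypothetical addition so the loop reads self-contained:

```shell
#!/usr/bin/env bash
# health_status prints a container's Docker health state, matching the
# "docker inspect -f '{{.State.Health.Status}}'" calls seen in the trace.
health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll every 5 seconds until the container reports "healthy", giving up
# after max_attempts polls (the trace invokes this with max_attempts=60).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(health_status "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With 60 attempts at 5-second intervals this bounds the wait at roughly five minutes per container, which fits the three back-to-back waits (ceph-ansible, kolla-ansible, osism-ansible) seen in the log.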
2025-06-22 11:48:31.382623 | orchestrator |
2025-06-22 11:48:31.382741 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-22 11:48:31.382759 | orchestrator |
2025-06-22 11:48:31.382770 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-22 11:48:31.382859 | orchestrator | Sunday 22 June 2025 11:48:31 +0000 (0:00:00.213) 0:00:00.213 ***********
2025-06-22 11:48:38.236532 | orchestrator | ok: [testbed-manager]
2025-06-22 11:48:38.237767 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:48:38.239322 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:48:38.240995 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:48:38.242396 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:48:38.244799 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:48:38.245335 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:48:38.250124 | orchestrator |
2025-06-22 11:48:38.250488 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-22 11:48:38.251327 | orchestrator |
2025-06-22 11:48:38.252230 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-22 11:48:38.252982 | orchestrator | Sunday 22 June 2025 11:48:38 +0000 (0:00:06.858) 0:00:07.071 ***********
2025-06-22 11:48:38.367296 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:48:38.437130 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:48:38.505023 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:48:38.580258 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:48:38.651862 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:48:38.691854 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:48:38.691929 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:48:38.691943 | orchestrator |
2025-06-22 11:48:38.692212 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:48:38.692506 | orchestrator | 2025-06-22 11:48:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:48:38.692680 | orchestrator | 2025-06-22 11:48:38 | INFO  | Please wait and do not abort execution.
2025-06-22 11:48:38.693433 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.693766 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.695098 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.695396 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.695756 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.696087 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.696521 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-22 11:48:38.696748 | orchestrator |
2025-06-22 11:48:38.697004 | orchestrator |
2025-06-22 11:48:38.697330 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:48:38.697539 | orchestrator | Sunday 22 June 2025 11:48:38 +0000 (0:00:00.455) 0:00:07.527 ***********
2025-06-22 11:48:38.697847 | orchestrator | ===============================================================================
2025-06-22 11:48:38.698145 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.86s
2025-06-22 11:48:38.698438 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2025-06-22 11:48:39.154371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-22 11:48:39.173978 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-22 11:48:39.191944 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-22 11:48:39.208718 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-22 11:48:39.222951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-22 11:48:39.240710 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-22 11:48:39.255941 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-22 11:48:39.269116 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-22 11:48:39.281109 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-22 11:48:39.292221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-22 11:48:39.303719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-22 11:48:39.314092 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-22 11:48:39.324250 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-22 11:48:39.334607 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-22 11:48:39.345332 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-22 11:48:39.362121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-22 11:48:39.371279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-22 11:48:39.380577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-22 11:48:39.390612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-22 11:48:39.400678 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-22 11:48:39.410853 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-22 11:48:39.904120 | orchestrator | ok: Runtime: 0:20:27.213469
2025-06-22 11:48:39.996574 |
2025-06-22 11:48:39.996682 | TASK [Deploy services]
2025-06-22 11:48:40.530262 | orchestrator | skipping: Conditional result was False
2025-06-22 11:48:40.548420 |
2025-06-22 11:48:40.548589 | TASK [Deploy in a nutshell]
2025-06-22 11:48:41.240994 | orchestrator | + set -e
2025-06-22 11:48:41.241106 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-22 11:48:41.241117 | orchestrator | ++ export INTERACTIVE=false
2025-06-22 11:48:41.241126 | orchestrator | ++ INTERACTIVE=false
2025-06-22 11:48:41.241131 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-22 11:48:41.241136 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-22 11:48:41.241142 | orchestrator | + source /opt/manager-vars.sh
2025-06-22 11:48:41.241164 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-22 11:48:41.241176 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-22 11:48:41.241181 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-22 11:48:41.241187 | orchestrator | ++ CEPH_VERSION=reef
2025-06-22 11:48:41.241199 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-22 11:48:41.241207 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-22 11:48:41.241211 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-22 11:48:41.241219 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-22 11:48:41.241223 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-22 11:48:41.241228 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-22 11:48:41.241232 | orchestrator | ++ export ARA=false
2025-06-22 11:48:41.241236 | orchestrator | ++ ARA=false
2025-06-22 11:48:41.241240 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-22 11:48:41.241246 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-22 11:48:41.241249 | orchestrator | ++ export TEMPEST=false
2025-06-22 11:48:41.241253 | orchestrator | ++ TEMPEST=false
2025-06-22 11:48:41.241376 | orchestrator |
2025-06-22 11:48:41.241382 | orchestrator | # PULL IMAGES
2025-06-22 11:48:41.241386 | orchestrator |
2025-06-22 11:48:41.241390 | orchestrator | ++ export IS_ZUUL=true
2025-06-22 11:48:41.241394 | orchestrator | ++ IS_ZUUL=true
2025-06-22 11:48:41.241398 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:48:41.241402 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200
2025-06-22 11:48:41.241406 | orchestrator | ++ export EXTERNAL_API=false
2025-06-22 11:48:41.241410 | orchestrator | ++ EXTERNAL_API=false
2025-06-22 11:48:41.241414 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-22 11:48:41.241418 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-22 11:48:41.241422 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-22 11:48:41.241425 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-22 11:48:41.241429 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-22 11:48:41.241437 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-22 11:48:41.241441 | orchestrator | + echo
2025-06-22 11:48:41.241445 | orchestrator | + echo '# PULL IMAGES'
2025-06-22 11:48:41.241448 | orchestrator | + echo
2025-06-22 11:48:41.243354 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-22 11:48:41.297252 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-22 11:48:41.297333 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-22 11:48:42.980081 | orchestrator | 2025-06-22 11:48:42 | INFO  | Trying to run play pull-images in environment custom
2025-06-22 11:48:42.984671 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:48:42.984710 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:48:42.984723 | orchestrator | Registering Redlock._release_script
2025-06-22 11:48:43.045240 | orchestrator | 2025-06-22 11:48:43 | INFO  | Task 126a4fd4-2a6d-46dd-86fa-97e70f921dfc (pull-images) was prepared for execution.
2025-06-22 11:48:43.045327 | orchestrator | 2025-06-22 11:48:43 | INFO  | It takes a moment until task 126a4fd4-2a6d-46dd-86fa-97e70f921dfc (pull-images) has been started and output is visible here.
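The `semver` helper traced twice in this log (`semver 9.1.0 7.1.1` and `semver 9.1.0 7.0.0`, each printing `1`, which the script then gates with `[[ 1 -ge 0 ]]`) is a three-way version comparison. A hypothetical reconstruction, assuming only the observed contract that it prints -1, 0, or 1, built on GNU `sort -V` for version ordering:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the semver helper: prints 1 if version $1 is newer
# than $2, 0 if they are equal, and -1 if $1 is older. The real helper ships
# with the testbed configuration and may be implemented differently.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1   # $1 sorts first under version ordering, so it is older
    else
        echo 1
    fi
}
```

The `-ge 0` check in the deploy script then means "the installed manager version is at least the required version", which is why the `pull-images` play runs here with MANAGER_VERSION=9.1.0.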
2025-06-22 11:48:47.119633 | orchestrator |
2025-06-22 11:48:47.122004 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-22 11:48:47.122804 | orchestrator |
2025-06-22 11:48:47.124684 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-22 11:48:47.125622 | orchestrator | Sunday 22 June 2025 11:48:47 +0000 (0:00:00.167) 0:00:00.167 ***********
2025-06-22 11:49:55.347214 | orchestrator | changed: [testbed-manager]
2025-06-22 11:49:55.347425 | orchestrator |
2025-06-22 11:49:55.347464 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-22 11:49:55.348212 | orchestrator | Sunday 22 June 2025 11:49:55 +0000 (0:01:08.226) 0:01:08.394 ***********
2025-06-22 11:50:48.287819 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-22 11:50:48.288106 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-22 11:50:48.288155 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-22 11:50:48.288766 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-22 11:50:48.289003 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-22 11:50:48.289658 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-22 11:50:48.290445 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-22 11:50:48.291193 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-22 11:50:48.291628 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-22 11:50:48.292386 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-22 11:50:48.293157 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-22 11:50:48.293875 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-22 11:50:48.295032 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-22 11:50:48.295473 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-22 11:50:48.295764 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-22 11:50:48.296371 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-22 11:50:48.296844 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-22 11:50:48.297358 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-22 11:50:48.297678 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-22 11:50:48.298135 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-22 11:50:48.298401 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-22 11:50:48.298891 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-22 11:50:48.299500 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-22 11:50:48.299801 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-22 11:50:48.300269 | orchestrator |
2025-06-22 11:50:48.300503 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:50:48.301584 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:50:48.302164 | orchestrator |
2025-06-22 11:50:48.302527 | orchestrator |
2025-06-22 11:50:48.303427 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:50:48.303498 | orchestrator | 2025-06-22 11:50:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:50:48.303521 | orchestrator | 2025-06-22 11:50:48 | INFO  | Please wait and do not abort execution.
2025-06-22 11:50:48.303952 | orchestrator | Sunday 22 June 2025 11:50:48 +0000 (0:00:52.944) 0:02:01.338 ***********
2025-06-22 11:50:48.304425 | orchestrator | ===============================================================================
2025-06-22 11:50:48.304591 | orchestrator | Pull keystone image ---------------------------------------------------- 68.23s
2025-06-22 11:50:48.305230 | orchestrator | Pull other images ------------------------------------------------------ 52.94s
2025-06-22 11:50:50.646347 | orchestrator | 2025-06-22 11:50:50 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-22 11:50:50.653159 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:50:50.653201 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:50:50.653214 | orchestrator | Registering Redlock._release_script
2025-06-22 11:50:50.719145 | orchestrator | 2025-06-22 11:50:50 | INFO  | Task 37fa2542-c558-44df-8677-f621089e0517 (wipe-partitions) was prepared for execution.
2025-06-22 11:50:50.719241 | orchestrator | 2025-06-22 11:50:50 | INFO  | It takes a moment until task 37fa2542-c558-44df-8677-f621089e0517 (wipe-partitions) has been started and output is visible here.
2025-06-22 11:50:54.780155 | orchestrator | 2025-06-22 11:50:54.782239 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-22 11:50:54.782273 | orchestrator | 2025-06-22 11:50:54.782585 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-22 11:50:54.783794 | orchestrator | Sunday 22 June 2025 11:50:54 +0000 (0:00:00.097) 0:00:00.097 *********** 2025-06-22 11:50:55.422345 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:50:55.422459 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:50:55.422473 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:50:55.422484 | orchestrator | 2025-06-22 11:50:55.422496 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-22 11:50:55.422508 | orchestrator | Sunday 22 June 2025 11:50:55 +0000 (0:00:00.637) 0:00:00.734 *********** 2025-06-22 11:50:55.556900 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:50:55.627998 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:50:55.628914 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:50:55.629278 | orchestrator | 2025-06-22 11:50:55.629575 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-22 11:50:55.629749 | orchestrator | Sunday 22 June 2025 11:50:55 +0000 (0:00:00.208) 0:00:00.943 *********** 2025-06-22 11:50:56.370172 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:50:56.372214 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:50:56.372629 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:50:56.372654 | orchestrator | 2025-06-22 11:50:56.372836 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-22 11:50:56.373116 | orchestrator | Sunday 22 June 2025 11:50:56 +0000 (0:00:00.744) 0:00:01.687 *********** 2025-06-22 11:50:56.543568 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 11:50:56.621471 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:50:56.621694 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:50:56.621902 | orchestrator | 2025-06-22 11:50:56.622818 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-22 11:50:56.622850 | orchestrator | Sunday 22 June 2025 11:50:56 +0000 (0:00:00.252) 0:00:01.939 *********** 2025-06-22 11:50:57.818305 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 11:50:57.818396 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 11:50:57.818410 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 11:50:57.818463 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 11:50:57.818476 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 11:50:57.818546 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 11:50:57.818852 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 11:50:57.819229 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 11:50:57.822005 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 11:50:57.822170 | orchestrator | 2025-06-22 11:50:57.822457 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-22 11:50:57.822761 | orchestrator | Sunday 22 June 2025 11:50:57 +0000 (0:00:01.191) 0:00:03.131 *********** 2025-06-22 11:50:59.127092 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 11:50:59.127188 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 11:50:59.127351 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 11:50:59.130165 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 11:50:59.130285 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 11:50:59.130721 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-06-22 11:50:59.131151 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 11:50:59.135813 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 11:50:59.135838 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 11:50:59.135850 | orchestrator | 2025-06-22 11:50:59.135862 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-22 11:50:59.135874 | orchestrator | Sunday 22 June 2025 11:50:59 +0000 (0:00:01.310) 0:00:04.441 *********** 2025-06-22 11:51:02.271139 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 11:51:02.271405 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 11:51:02.271727 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 11:51:02.272194 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 11:51:02.272468 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 11:51:02.272839 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 11:51:02.273135 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 11:51:02.273469 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 11:51:02.273852 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 11:51:02.274268 | orchestrator | 2025-06-22 11:51:02.275300 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-22 11:51:02.275649 | orchestrator | Sunday 22 June 2025 11:51:02 +0000 (0:00:03.145) 0:00:07.586 *********** 2025-06-22 11:51:02.826796 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:51:02.826988 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:51:02.827007 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:51:02.827283 | orchestrator | 2025-06-22 11:51:02.828005 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-06-22 11:51:02.831210 | orchestrator | Sunday 22 June 2025 11:51:02 +0000 (0:00:00.556) 0:00:08.142 *********** 2025-06-22 11:51:03.439955 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:51:03.440502 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:51:03.442294 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:51:03.442440 | orchestrator | 2025-06-22 11:51:03.442922 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:51:03.445507 | orchestrator | 2025-06-22 11:51:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 11:51:03.445536 | orchestrator | 2025-06-22 11:51:03 | INFO  | Please wait and do not abort execution. 2025-06-22 11:51:03.446116 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:03.446766 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:03.447124 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:03.447716 | orchestrator | 2025-06-22 11:51:03.448139 | orchestrator | 2025-06-22 11:51:03.448806 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:51:03.449111 | orchestrator | Sunday 22 June 2025 11:51:03 +0000 (0:00:00.606) 0:00:08.749 *********** 2025-06-22 11:51:03.449533 | orchestrator | =============================================================================== 2025-06-22 11:51:03.449999 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.15s 2025-06-22 11:51:03.450383 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s 2025-06-22 11:51:03.451052 | orchestrator | Check device availability 
----------------------------------------------- 1.19s 2025-06-22 11:51:03.451312 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s 2025-06-22 11:51:03.451969 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.64s 2025-06-22 11:51:03.452155 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-06-22 11:51:03.452674 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s 2025-06-22 11:51:03.453054 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-06-22 11:51:03.453895 | orchestrator | Remove all rook related logical devices --------------------------------- 0.21s 2025-06-22 11:51:06.114202 | orchestrator | Registering Redlock._acquired_script 2025-06-22 11:51:06.114321 | orchestrator | Registering Redlock._extend_script 2025-06-22 11:51:06.114345 | orchestrator | Registering Redlock._release_script 2025-06-22 11:51:06.177292 | orchestrator | 2025-06-22 11:51:06 | INFO  | Task dbcbcfa8-bf6c-43a7-a7d7-83cdbf97709a (facts) was prepared for execution. 2025-06-22 11:51:06.177396 | orchestrator | 2025-06-22 11:51:06 | INFO  | It takes a moment until task dbcbcfa8-bf6c-43a7-a7d7-83cdbf97709a (facts) has been started and output is visible here. 
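The wipe-partitions play above runs, per data disk: `wipefs` to clear filesystem/LVM/RAID signatures, a 32 MiB zero-fill of the start of the device, and finally a udev rules reload plus a device-event trigger so the kernel re-reads the now-empty disks. A minimal sketch of the equivalent command sequence, using the device list from the log; it only builds the commands rather than executing them (running them erases disks), and the exact flags (`conv=fsync`, `--reload-rules`) are assumptions, not taken from the log:

```python
DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # data disks from the log

def wipe_commands(devices):
    """Return the per-device and global commands of the play, in order."""
    cmds = []
    for dev in devices:
        # Remove filesystem/LVM/RAID signatures so the disk looks blank.
        cmds.append(["wipefs", "--all", dev])
        # Zero the first 32 MiB: clears partition tables and any leftover
        # metadata that wipefs does not recognize.
        cmds.append(["dd", "if=/dev/zero", f"of={dev}", "bs=1M", "count=32",
                     "conv=fsync"])
    # Reload udev rules and ask the kernel to re-emit device events so
    # stale /dev/disk/by-* links disappear.
    cmds.append(["udevadm", "control", "--reload-rules"])
    cmds.append(["udevadm", "trigger"])
    return cmds

if __name__ == "__main__":
    for cmd in wipe_commands(DEVICES):
        print(" ".join(cmd))
```

Wiping before the Ceph configuration step matters because `ceph-volume` refuses devices that still carry old signatures or partition tables.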
2025-06-22 11:51:11.012369 | orchestrator | 2025-06-22 11:51:11.014863 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 11:51:11.015228 | orchestrator | 2025-06-22 11:51:11.015878 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 11:51:11.016332 | orchestrator | Sunday 22 June 2025 11:51:11 +0000 (0:00:00.277) 0:00:00.277 *********** 2025-06-22 11:51:11.722531 | orchestrator | ok: [testbed-manager] 2025-06-22 11:51:12.198286 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:51:12.199113 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:51:12.200661 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:51:12.202226 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:51:12.205672 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:51:12.206116 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:51:12.206643 | orchestrator | 2025-06-22 11:51:12.209042 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 11:51:12.209097 | orchestrator | Sunday 22 June 2025 11:51:12 +0000 (0:00:01.183) 0:00:01.461 *********** 2025-06-22 11:51:12.373235 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:51:12.457360 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:51:12.536524 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:51:12.609118 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:51:12.678486 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:13.352298 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:51:13.355397 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:13.356264 | orchestrator | 2025-06-22 11:51:13.357330 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 11:51:13.360370 | orchestrator | 2025-06-22 11:51:13.360964 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-22 11:51:13.365507 | orchestrator | Sunday 22 June 2025 11:51:13 +0000 (0:00:01.157) 0:00:02.619 *********** 2025-06-22 11:51:15.523087 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:51:19.814829 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:51:19.817021 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:51:19.820023 | orchestrator | ok: [testbed-manager] 2025-06-22 11:51:19.822890 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:51:19.826774 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:51:19.828021 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:51:19.829189 | orchestrator | 2025-06-22 11:51:19.831953 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 11:51:19.832451 | orchestrator | 2025-06-22 11:51:19.833883 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 11:51:19.835836 | orchestrator | Sunday 22 June 2025 11:51:19 +0000 (0:00:06.464) 0:00:09.083 *********** 2025-06-22 11:51:19.974048 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:51:20.049812 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:51:20.148579 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:51:20.254789 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:51:20.338276 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:20.379582 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:51:20.381232 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:20.382960 | orchestrator | 2025-06-22 11:51:20.385655 | orchestrator | 2025-06-22 11:51:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 11:51:20.385682 | orchestrator | 2025-06-22 11:51:20 | INFO  | Please wait and do not abort execution. 
2025-06-22 11:51:20.386089 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:51:20.386799 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.387817 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.388708 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.389831 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.391071 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.391401 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.392329 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 11:51:20.392776 | orchestrator | 2025-06-22 11:51:20.394763 | orchestrator | 2025-06-22 11:51:20.395959 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:51:20.397709 | orchestrator | Sunday 22 June 2025 11:51:20 +0000 (0:00:00.564) 0:00:09.648 *********** 2025-06-22 11:51:20.398161 | orchestrator | =============================================================================== 2025-06-22 11:51:20.399530 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.46s 2025-06-22 11:51:20.400414 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2025-06-22 11:51:20.401368 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s 2025-06-22 11:51:20.403168 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 
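The `osism.commons.facts` role above creates the custom facts directory and, when configured, copies fact files into it (skipped here). Ansible's local-facts mechanism reads JSON `.fact` files from `/etc/ansible/facts.d` and exposes them to playbooks as `ansible_local.<name>`. A minimal sketch of that mechanism, using a temporary directory in place of `/etc/ansible/facts.d` and an illustrative fact name:

```python
import json
import tempfile
from pathlib import Path

def write_fact(facts_dir: Path, name: str, data: dict) -> Path:
    """Write a static JSON local-fact file, as a role would deploy it."""
    path = facts_dir / f"{name}.fact"
    path.write_text(json.dumps(data))
    return path

def read_facts(facts_dir: Path) -> dict:
    """Mimic how the setup module folds .fact files into ansible_local."""
    return {p.stem: json.loads(p.read_text())
            for p in facts_dir.glob("*.fact")}

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        facts_dir = Path(d)  # stands in for /etc/ansible/facts.d
        write_fact(facts_dir, "testbed", {"role": "manager"})
        print(read_facts(facts_dir))  # -> {'testbed': {'role': 'manager'}}
```

The subsequent "Gathers facts about hosts" task refreshes the fact cache for all seven hosts so that later plays (such as the Ceph configuration below) can delegate lookups without re-gathering.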
2025-06-22 11:51:22.957142 | orchestrator | 2025-06-22 11:51:22 | INFO  | Task 164fc55f-3a99-4183-b3d9-4079cbb73635 (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-22 11:51:22.957252 | orchestrator | 2025-06-22 11:51:22 | INFO  | It takes a moment until task 164fc55f-3a99-4183-b3d9-4079cbb73635 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-22 11:51:27.184634 | orchestrator | 2025-06-22 11:51:27.187966 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 11:51:27.190497 | orchestrator | 2025-06-22 11:51:27.191079 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 11:51:27.191711 | orchestrator | Sunday 22 June 2025 11:51:27 +0000 (0:00:00.319) 0:00:00.319 *********** 2025-06-22 11:51:27.427754 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 11:51:27.428393 | orchestrator | 2025-06-22 11:51:27.431371 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 11:51:27.431869 | orchestrator | Sunday 22 June 2025 11:51:27 +0000 (0:00:00.244) 0:00:00.563 *********** 2025-06-22 11:51:27.644559 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:51:27.646118 | orchestrator | 2025-06-22 11:51:27.648422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:27.649304 | orchestrator | Sunday 22 June 2025 11:51:27 +0000 (0:00:00.218) 0:00:00.782 *********** 2025-06-22 11:51:28.034738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 11:51:28.034855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 11:51:28.034876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 11:51:28.035486 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 11:51:28.036729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 11:51:28.037113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 11:51:28.038944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 11:51:28.039615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 11:51:28.039877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 11:51:28.041133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 11:51:28.042453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 11:51:28.042487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 11:51:28.043026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 11:51:28.044095 | orchestrator | 2025-06-22 11:51:28.044162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:28.045809 | orchestrator | Sunday 22 June 2025 11:51:28 +0000 (0:00:00.383) 0:00:01.166 *********** 2025-06-22 11:51:28.585954 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:28.587142 | orchestrator | 2025-06-22 11:51:28.589882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:28.591322 | orchestrator | Sunday 22 June 2025 11:51:28 +0000 (0:00:00.555) 0:00:01.721 *********** 2025-06-22 11:51:28.805084 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:28.805188 | orchestrator | 2025-06-22 11:51:28.805804 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:28.805831 | orchestrator | Sunday 22 June 2025 11:51:28 +0000 (0:00:00.220) 0:00:01.941 *********** 2025-06-22 11:51:29.003856 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:29.007952 | orchestrator | 2025-06-22 11:51:29.010805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:29.012380 | orchestrator | Sunday 22 June 2025 11:51:28 +0000 (0:00:00.199) 0:00:02.141 *********** 2025-06-22 11:51:29.210402 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:29.210813 | orchestrator | 2025-06-22 11:51:29.212748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:29.212791 | orchestrator | Sunday 22 June 2025 11:51:29 +0000 (0:00:00.205) 0:00:02.347 *********** 2025-06-22 11:51:29.405930 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:29.406199 | orchestrator | 2025-06-22 11:51:29.406717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:29.408186 | orchestrator | Sunday 22 June 2025 11:51:29 +0000 (0:00:00.196) 0:00:02.543 *********** 2025-06-22 11:51:29.633251 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:29.633356 | orchestrator | 2025-06-22 11:51:29.633508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:29.633818 | orchestrator | Sunday 22 June 2025 11:51:29 +0000 (0:00:00.227) 0:00:02.770 *********** 2025-06-22 11:51:29.831019 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:29.831767 | orchestrator | 2025-06-22 11:51:29.831807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:29.832789 | orchestrator | Sunday 22 June 2025 11:51:29 +0000 (0:00:00.198) 0:00:02.968 *********** 
2025-06-22 11:51:30.036979 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:30.038065 | orchestrator | 2025-06-22 11:51:30.040308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:30.041947 | orchestrator | Sunday 22 June 2025 11:51:30 +0000 (0:00:00.206) 0:00:03.175 *********** 2025-06-22 11:51:30.441558 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41) 2025-06-22 11:51:30.442317 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41) 2025-06-22 11:51:30.443335 | orchestrator | 2025-06-22 11:51:30.444331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:30.447293 | orchestrator | Sunday 22 June 2025 11:51:30 +0000 (0:00:00.402) 0:00:03.577 *********** 2025-06-22 11:51:30.872739 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2) 2025-06-22 11:51:30.874265 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2) 2025-06-22 11:51:30.875571 | orchestrator | 2025-06-22 11:51:30.876673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:30.877905 | orchestrator | Sunday 22 June 2025 11:51:30 +0000 (0:00:00.432) 0:00:04.010 *********** 2025-06-22 11:51:31.516107 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357) 2025-06-22 11:51:31.518522 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357) 2025-06-22 11:51:31.518559 | orchestrator | 2025-06-22 11:51:31.522245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:31.522291 | orchestrator | Sunday 22 June 2025 11:51:31 +0000 
(0:00:00.635) 0:00:04.646 *********** 2025-06-22 11:51:32.153343 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b) 2025-06-22 11:51:32.155238 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b) 2025-06-22 11:51:32.155812 | orchestrator | 2025-06-22 11:51:32.156850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:32.157946 | orchestrator | Sunday 22 June 2025 11:51:32 +0000 (0:00:00.642) 0:00:05.288 *********** 2025-06-22 11:51:32.982245 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 11:51:32.982932 | orchestrator | 2025-06-22 11:51:32.987868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:32.989276 | orchestrator | Sunday 22 June 2025 11:51:32 +0000 (0:00:00.829) 0:00:06.118 *********** 2025-06-22 11:51:33.382312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 11:51:33.382486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 11:51:33.385228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 11:51:33.385770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 11:51:33.386701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 11:51:33.386983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 11:51:33.387550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 11:51:33.388312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-06-22 11:51:33.389070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 11:51:33.390374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 11:51:33.390772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 11:51:33.391319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 11:51:33.391790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 11:51:33.395151 | orchestrator | 2025-06-22 11:51:33.396073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:33.396098 | orchestrator | Sunday 22 June 2025 11:51:33 +0000 (0:00:00.397) 0:00:06.516 *********** 2025-06-22 11:51:33.612133 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:33.614918 | orchestrator | 2025-06-22 11:51:33.614959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:33.614974 | orchestrator | Sunday 22 June 2025 11:51:33 +0000 (0:00:00.233) 0:00:06.749 *********** 2025-06-22 11:51:33.824968 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:33.828015 | orchestrator | 2025-06-22 11:51:33.828051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:33.828065 | orchestrator | Sunday 22 June 2025 11:51:33 +0000 (0:00:00.209) 0:00:06.960 *********** 2025-06-22 11:51:34.029305 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:34.030301 | orchestrator | 2025-06-22 11:51:34.032485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:34.033529 | orchestrator | Sunday 22 June 2025 11:51:34 +0000 
(0:00:00.205) 0:00:07.166 *********** 2025-06-22 11:51:34.243454 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:34.244619 | orchestrator | 2025-06-22 11:51:34.245961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:34.247342 | orchestrator | Sunday 22 June 2025 11:51:34 +0000 (0:00:00.213) 0:00:07.380 *********** 2025-06-22 11:51:34.459515 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:34.459695 | orchestrator | 2025-06-22 11:51:34.459813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:34.460509 | orchestrator | Sunday 22 June 2025 11:51:34 +0000 (0:00:00.212) 0:00:07.593 *********** 2025-06-22 11:51:34.664328 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:34.664642 | orchestrator | 2025-06-22 11:51:34.664769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:34.665115 | orchestrator | Sunday 22 June 2025 11:51:34 +0000 (0:00:00.206) 0:00:07.800 *********** 2025-06-22 11:51:34.842541 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:34.842696 | orchestrator | 2025-06-22 11:51:34.843145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:34.846526 | orchestrator | Sunday 22 June 2025 11:51:34 +0000 (0:00:00.177) 0:00:07.978 *********** 2025-06-22 11:51:35.029376 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:51:35.031354 | orchestrator | 2025-06-22 11:51:35.035915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:51:35.036043 | orchestrator | Sunday 22 June 2025 11:51:35 +0000 (0:00:00.188) 0:00:08.166 *********** 2025-06-22 11:51:36.183220 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 11:51:36.184799 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 
11:51:36.188501 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-22 11:51:36.188836 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-22 11:51:36.191335 | orchestrator |
2025-06-22 11:51:36.192846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:36.194627 | orchestrator | Sunday 22 June 2025 11:51:36 +0000 (0:00:01.153) 0:00:09.320 ***********
2025-06-22 11:51:36.422183 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:36.423133 | orchestrator |
2025-06-22 11:51:36.425034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:36.425866 | orchestrator | Sunday 22 June 2025 11:51:36 +0000 (0:00:00.232) 0:00:09.552 ***********
2025-06-22 11:51:36.620281 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:36.623408 | orchestrator |
2025-06-22 11:51:36.623435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:36.626281 | orchestrator | Sunday 22 June 2025 11:51:36 +0000 (0:00:00.203) 0:00:09.756 ***********
2025-06-22 11:51:36.864027 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:36.864642 | orchestrator |
2025-06-22 11:51:36.867608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:36.868113 | orchestrator | Sunday 22 June 2025 11:51:36 +0000 (0:00:00.243) 0:00:09.999 ***********
2025-06-22 11:51:37.073416 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:37.073518 | orchestrator |
2025-06-22 11:51:37.073534 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-22 11:51:37.074092 | orchestrator | Sunday 22 June 2025 11:51:37 +0000 (0:00:00.211) 0:00:10.210 ***********
2025-06-22 11:51:37.280091 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-22 11:51:37.280324 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-22 11:51:37.281006 | orchestrator |
2025-06-22 11:51:37.283297 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-22 11:51:37.292544 | orchestrator | Sunday 22 June 2025 11:51:37 +0000 (0:00:00.204) 0:00:10.414 ***********
2025-06-22 11:51:37.434141 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:37.434393 | orchestrator |
2025-06-22 11:51:37.435679 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-22 11:51:37.436751 | orchestrator | Sunday 22 June 2025 11:51:37 +0000 (0:00:00.156) 0:00:10.571 ***********
2025-06-22 11:51:37.602916 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:37.603019 | orchestrator |
2025-06-22 11:51:37.603351 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-22 11:51:37.603674 | orchestrator | Sunday 22 June 2025 11:51:37 +0000 (0:00:00.168) 0:00:10.740 ***********
2025-06-22 11:51:37.756572 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:37.757539 | orchestrator |
2025-06-22 11:51:37.761252 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-22 11:51:37.761547 | orchestrator | Sunday 22 June 2025 11:51:37 +0000 (0:00:00.155) 0:00:10.895 ***********
2025-06-22 11:51:37.903426 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:51:37.903512 | orchestrator |
2025-06-22 11:51:37.903527 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-22 11:51:37.903834 | orchestrator | Sunday 22 June 2025 11:51:37 +0000 (0:00:00.147) 0:00:11.042 ***********
2025-06-22 11:51:38.088790 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}})
2025-06-22 11:51:38.094680 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b51a6ec-8722-57c7-ad6b-56758d62ede6'}})
2025-06-22 11:51:38.094859 | orchestrator |
2025-06-22 11:51:38.095194 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-22 11:51:38.096822 | orchestrator | Sunday 22 June 2025 11:51:38 +0000 (0:00:00.183) 0:00:11.226 ***********
2025-06-22 11:51:38.302256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}})
2025-06-22 11:51:38.303479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b51a6ec-8722-57c7-ad6b-56758d62ede6'}})
2025-06-22 11:51:38.303517 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:38.304157 | orchestrator |
2025-06-22 11:51:38.304369 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-22 11:51:38.305235 | orchestrator | Sunday 22 June 2025 11:51:38 +0000 (0:00:00.208) 0:00:11.434 ***********
2025-06-22 11:51:38.708714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}})
2025-06-22 11:51:38.708887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b51a6ec-8722-57c7-ad6b-56758d62ede6'}})
2025-06-22 11:51:38.709456 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:38.710216 | orchestrator |
2025-06-22 11:51:38.711797 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-22 11:51:38.711954 | orchestrator | Sunday 22 June 2025 11:51:38 +0000 (0:00:00.412) 0:00:11.847 ***********
2025-06-22 11:51:38.854940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}})
2025-06-22 11:51:38.855661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b51a6ec-8722-57c7-ad6b-56758d62ede6'}})
2025-06-22 11:51:38.857423 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:38.860409 | orchestrator |
2025-06-22 11:51:38.863250 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-22 11:51:38.864508 | orchestrator | Sunday 22 June 2025 11:51:38 +0000 (0:00:00.145) 0:00:11.993 ***********
2025-06-22 11:51:39.004682 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:51:39.006578 | orchestrator |
2025-06-22 11:51:39.007190 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-22 11:51:39.008688 | orchestrator | Sunday 22 June 2025 11:51:38 +0000 (0:00:00.145) 0:00:12.139 ***********
2025-06-22 11:51:39.156093 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:51:39.156317 | orchestrator |
2025-06-22 11:51:39.158902 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-22 11:51:39.160752 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.151) 0:00:12.291 ***********
2025-06-22 11:51:39.316159 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:39.317694 | orchestrator |
2025-06-22 11:51:39.319182 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-22 11:51:39.320956 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.160) 0:00:12.452 ***********
2025-06-22 11:51:39.441066 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:39.441839 | orchestrator |
2025-06-22 11:51:39.443312 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-22 11:51:39.448563 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.126) 0:00:12.579 ***********
2025-06-22 11:51:39.580725 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:39.584146 | orchestrator |
2025-06-22 11:51:39.586178 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-22 11:51:39.588296 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.136) 0:00:12.715 ***********
2025-06-22 11:51:39.720468 | orchestrator | ok: [testbed-node-3] => {
2025-06-22 11:51:39.721200 | orchestrator |  "ceph_osd_devices": {
2025-06-22 11:51:39.722730 | orchestrator |  "sdb": {
2025-06-22 11:51:39.723793 | orchestrator |  "osd_lvm_uuid": "6ffadd37-6b10-5a4f-8f0b-2da52ae5008f"
2025-06-22 11:51:39.727993 | orchestrator |  },
2025-06-22 11:51:39.728743 | orchestrator |  "sdc": {
2025-06-22 11:51:39.729721 | orchestrator |  "osd_lvm_uuid": "0b51a6ec-8722-57c7-ad6b-56758d62ede6"
2025-06-22 11:51:39.730367 | orchestrator |  }
2025-06-22 11:51:39.731198 | orchestrator |  }
2025-06-22 11:51:39.734149 | orchestrator | }
2025-06-22 11:51:39.734919 | orchestrator |
2025-06-22 11:51:39.735719 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-22 11:51:39.736279 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.141) 0:00:12.856 ***********
2025-06-22 11:51:39.872673 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:39.876242 | orchestrator |
2025-06-22 11:51:39.877430 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-22 11:51:39.878188 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.150) 0:00:13.007 ***********
2025-06-22 11:51:40.005388 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:40.005669 | orchestrator |
2025-06-22 11:51:40.007635 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-22 11:51:40.012150 | orchestrator | Sunday 22 June 2025 11:51:39 +0000 (0:00:00.135) 0:00:13.143 ***********
2025-06-22 11:51:40.147474 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:51:40.148271 | orchestrator |
2025-06-22 11:51:40.149127 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-22 11:51:40.150004 | orchestrator | Sunday 22 June 2025 11:51:40 +0000 (0:00:00.142) 0:00:13.285 ***********
2025-06-22 11:51:40.360818 | orchestrator | changed: [testbed-node-3] => {
2025-06-22 11:51:40.362146 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-22 11:51:40.366814 | orchestrator |  "ceph_osd_devices": {
2025-06-22 11:51:40.367432 | orchestrator |  "sdb": {
2025-06-22 11:51:40.368315 | orchestrator |  "osd_lvm_uuid": "6ffadd37-6b10-5a4f-8f0b-2da52ae5008f"
2025-06-22 11:51:40.369199 | orchestrator |  },
2025-06-22 11:51:40.370001 | orchestrator |  "sdc": {
2025-06-22 11:51:40.371013 | orchestrator |  "osd_lvm_uuid": "0b51a6ec-8722-57c7-ad6b-56758d62ede6"
2025-06-22 11:51:40.374887 | orchestrator |  }
2025-06-22 11:51:40.375700 | orchestrator |  },
2025-06-22 11:51:40.376255 | orchestrator |  "lvm_volumes": [
2025-06-22 11:51:40.376785 | orchestrator |  {
2025-06-22 11:51:40.378927 | orchestrator |  "data": "osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f",
2025-06-22 11:51:40.380013 | orchestrator |  "data_vg": "ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f"
2025-06-22 11:51:40.380236 | orchestrator |  },
2025-06-22 11:51:40.380991 | orchestrator |  {
2025-06-22 11:51:40.381357 | orchestrator |  "data": "osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6",
2025-06-22 11:51:40.382075 | orchestrator |  "data_vg": "ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6"
2025-06-22 11:51:40.382516 | orchestrator |  }
2025-06-22 11:51:40.383093 | orchestrator |  ]
2025-06-22 11:51:40.384636 | orchestrator |  }
2025-06-22 11:51:40.385010 | orchestrator | }
2025-06-22 11:51:40.385807 | orchestrator |
2025-06-22 11:51:40.386479 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-22 11:51:40.387147 | orchestrator | Sunday 22 June 2025 11:51:40 +0000 (0:00:00.211) 0:00:13.497 ***********
2025-06-22 11:51:42.603410 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-22 11:51:42.606664 | orchestrator |
2025-06-22 11:51:42.609271 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-22 11:51:42.609809 | orchestrator |
2025-06-22 11:51:42.610647 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-22 11:51:42.611144 | orchestrator | Sunday 22 June 2025 11:51:42 +0000 (0:00:02.242) 0:00:15.739 ***********
2025-06-22 11:51:42.860569 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-22 11:51:42.860730 | orchestrator |
2025-06-22 11:51:42.860746 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-22 11:51:42.860758 | orchestrator | Sunday 22 June 2025 11:51:42 +0000 (0:00:00.256) 0:00:15.996 ***********
2025-06-22 11:51:43.118210 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:51:43.118344 | orchestrator |
2025-06-22 11:51:43.118380 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:43.121281 | orchestrator | Sunday 22 June 2025 11:51:43 +0000 (0:00:00.259) 0:00:16.255 ***********
2025-06-22 11:51:43.460057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-22 11:51:43.460516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-22 11:51:43.461339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-22 11:51:43.462217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-22 11:51:43.465630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-22 11:51:43.465735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-22 11:51:43.465818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-22 11:51:43.466373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-22 11:51:43.466641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-22 11:51:43.467008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-22 11:51:43.467407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-22 11:51:43.470105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-22 11:51:43.470237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-22 11:51:43.470670 | orchestrator |
2025-06-22 11:51:43.471102 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:43.472886 | orchestrator | Sunday 22 June 2025 11:51:43 +0000 (0:00:00.341) 0:00:16.597 ***********
2025-06-22 11:51:43.665422 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:43.666190 | orchestrator |
2025-06-22 11:51:43.667493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:43.668117 | orchestrator | Sunday 22 June 2025 11:51:43 +0000 (0:00:00.198) 0:00:16.802 ***********
2025-06-22 11:51:43.865818 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:43.866170 | orchestrator |
2025-06-22 11:51:43.867666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:43.868735 | orchestrator | Sunday 22 June 2025 11:51:43 +0000 (0:00:00.198) 0:00:17.000 ***********
2025-06-22 11:51:44.052136 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:44.052925 | orchestrator |
2025-06-22 11:51:44.054774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:44.056187 | orchestrator | Sunday 22 June 2025 11:51:44 +0000 (0:00:00.190) 0:00:17.190 ***********
2025-06-22 11:51:44.252322 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:44.252431 | orchestrator |
2025-06-22 11:51:44.252556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:44.256276 | orchestrator | Sunday 22 June 2025 11:51:44 +0000 (0:00:00.197) 0:00:17.388 ***********
2025-06-22 11:51:44.874514 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:44.875909 | orchestrator |
2025-06-22 11:51:44.875966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:44.876751 | orchestrator | Sunday 22 June 2025 11:51:44 +0000 (0:00:00.620) 0:00:18.008 ***********
2025-06-22 11:51:45.077526 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:45.083142 | orchestrator |
2025-06-22 11:51:45.083211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:45.083238 | orchestrator | Sunday 22 June 2025 11:51:45 +0000 (0:00:00.205) 0:00:18.214 ***********
2025-06-22 11:51:45.291403 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:45.291497 | orchestrator |
2025-06-22 11:51:45.291515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:45.294417 | orchestrator | Sunday 22 June 2025 11:51:45 +0000 (0:00:00.212) 0:00:18.427 ***********
2025-06-22 11:51:45.518068 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:45.519250 | orchestrator |
2025-06-22 11:51:45.521559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:45.523409 | orchestrator | Sunday 22 June 2025 11:51:45 +0000 (0:00:00.228) 0:00:18.655 ***********
2025-06-22 11:51:45.948078 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48)
2025-06-22 11:51:45.949295 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48)
2025-06-22 11:51:45.950362 | orchestrator |
2025-06-22 11:51:45.955798 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:45.957092 | orchestrator | Sunday 22 June 2025 11:51:45 +0000 (0:00:00.430) 0:00:19.086 ***********
2025-06-22 11:51:46.375535 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273)
2025-06-22 11:51:46.377280 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273)
2025-06-22 11:51:46.378699 | orchestrator |
2025-06-22 11:51:46.382811 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:46.383463 | orchestrator | Sunday 22 June 2025 11:51:46 +0000 (0:00:00.426) 0:00:19.513 ***********
2025-06-22 11:51:46.795465 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125)
2025-06-22 11:51:46.797165 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125)
2025-06-22 11:51:46.798971 | orchestrator |
2025-06-22 11:51:46.800146 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:46.801613 | orchestrator | Sunday 22 June 2025 11:51:46 +0000 (0:00:00.418) 0:00:19.931 ***********
2025-06-22 11:51:47.219677 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf)
2025-06-22 11:51:47.219855 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf)
2025-06-22 11:51:47.220373 | orchestrator |
2025-06-22 11:51:47.220775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:47.221069 | orchestrator | Sunday 22 June 2025 11:51:47 +0000 (0:00:00.427) 0:00:20.359 ***********
2025-06-22 11:51:47.552212 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-22 11:51:47.552446 | orchestrator |
2025-06-22 11:51:47.553835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:47.554150 | orchestrator | Sunday 22 June 2025 11:51:47 +0000 (0:00:00.331) 0:00:20.690 ***********
2025-06-22 11:51:47.921821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-22 11:51:47.924211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-22 11:51:47.925781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-22 11:51:47.926831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-22 11:51:47.927533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-22 11:51:47.928156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-22 11:51:47.928904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-22 11:51:47.929395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-22 11:51:47.930121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-22 11:51:47.930752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-22 11:51:47.931320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-22 11:51:47.931968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-22 11:51:47.932397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-22 11:51:47.933253 | orchestrator |
2025-06-22 11:51:47.933902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:47.934172 | orchestrator | Sunday 22 June 2025 11:51:47 +0000 (0:00:00.368) 0:00:21.059 ***********
2025-06-22 11:51:48.128561 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:48.129293 | orchestrator |
2025-06-22 11:51:48.130417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:48.131881 | orchestrator | Sunday 22 June 2025 11:51:48 +0000 (0:00:00.206) 0:00:21.265 ***********
2025-06-22 11:51:48.807095 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:48.807662 | orchestrator |
2025-06-22 11:51:48.808470 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:48.809270 | orchestrator | Sunday 22 June 2025 11:51:48 +0000 (0:00:00.678) 0:00:21.944 ***********
2025-06-22 11:51:48.995919 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:48.996263 | orchestrator |
2025-06-22 11:51:48.997316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:48.998396 | orchestrator | Sunday 22 June 2025 11:51:48 +0000 (0:00:00.189) 0:00:22.133 ***********
2025-06-22 11:51:49.189825 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:49.190491 | orchestrator |
2025-06-22 11:51:49.191413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:49.192658 | orchestrator | Sunday 22 June 2025 11:51:49 +0000 (0:00:00.194) 0:00:22.328 ***********
2025-06-22 11:51:49.394135 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:49.395130 | orchestrator |
2025-06-22 11:51:49.396509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:49.398367 | orchestrator | Sunday 22 June 2025 11:51:49 +0000 (0:00:00.204) 0:00:22.533 ***********
2025-06-22 11:51:49.590921 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:49.592940 | orchestrator |
2025-06-22 11:51:49.598938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:49.600108 | orchestrator | Sunday 22 June 2025 11:51:49 +0000 (0:00:00.196) 0:00:22.729 ***********
2025-06-22 11:51:49.795617 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:49.796327 | orchestrator |
2025-06-22 11:51:49.799969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:49.799997 | orchestrator | Sunday 22 June 2025 11:51:49 +0000 (0:00:00.202) 0:00:22.931 ***********
2025-06-22 11:51:49.993119 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:49.993243 | orchestrator |
2025-06-22 11:51:49.993257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:49.995941 | orchestrator | Sunday 22 June 2025 11:51:49 +0000 (0:00:00.196) 0:00:23.128 ***********
2025-06-22 11:51:50.605379 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-22 11:51:50.605540 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-22 11:51:50.608082 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-22 11:51:50.608605 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-22 11:51:50.609473 | orchestrator |
2025-06-22 11:51:50.610715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:50.610931 | orchestrator | Sunday 22 June 2025 11:51:50 +0000 (0:00:00.610) 0:00:23.739 ***********
2025-06-22 11:51:50.834128 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:50.835651 | orchestrator |
2025-06-22 11:51:50.835955 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:50.839843 | orchestrator | Sunday 22 June 2025 11:51:50 +0000 (0:00:00.227) 0:00:23.966 ***********
2025-06-22 11:51:51.064266 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:51.065746 | orchestrator |
2025-06-22 11:51:51.066381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:51.066686 | orchestrator | Sunday 22 June 2025 11:51:51 +0000 (0:00:00.234) 0:00:24.201 ***********
2025-06-22 11:51:51.266258 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:51.266493 | orchestrator |
2025-06-22 11:51:51.267999 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:51:51.268832 | orchestrator | Sunday 22 June 2025 11:51:51 +0000 (0:00:00.203) 0:00:24.404 ***********
2025-06-22 11:51:51.464811 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:51.465874 | orchestrator |
2025-06-22 11:51:51.466444 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-22 11:51:51.470012 | orchestrator | Sunday 22 June 2025 11:51:51 +0000 (0:00:00.198) 0:00:24.603 ***********
2025-06-22 11:51:51.814284 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-06-22 11:51:51.817535 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-06-22 11:51:51.817786 | orchestrator |
2025-06-22 11:51:51.818700 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-22 11:51:51.818945 | orchestrator | Sunday 22 June 2025 11:51:51 +0000 (0:00:00.344) 0:00:24.947 ***********
2025-06-22 11:51:51.997100 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:51.997240 | orchestrator |
2025-06-22 11:51:51.998098 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-22 11:51:51.998720 | orchestrator | Sunday 22 June 2025 11:51:51 +0000 (0:00:00.186) 0:00:25.134 ***********
2025-06-22 11:51:52.129495 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:52.133282 | orchestrator |
2025-06-22 11:51:52.133688 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-22 11:51:52.134715 | orchestrator | Sunday 22 June 2025 11:51:52 +0000 (0:00:00.134) 0:00:25.268 ***********
2025-06-22 11:51:52.272707 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:52.273710 | orchestrator |
2025-06-22 11:51:52.274659 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-22 11:51:52.275536 | orchestrator | Sunday 22 June 2025 11:51:52 +0000 (0:00:00.138) 0:00:25.407 ***********
2025-06-22 11:51:52.431494 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:51:52.431683 | orchestrator |
2025-06-22 11:51:52.432411 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-22 11:51:52.432789 | orchestrator | Sunday 22 June 2025 11:51:52 +0000 (0:00:00.161) 0:00:25.569 ***********
2025-06-22 11:51:52.594210 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd90edff2-979c-5e5e-98e2-f02394d35fb4'}})
2025-06-22 11:51:52.595242 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9de1692c-afc0-5cdb-8a59-e564d6a096fc'}})
2025-06-22 11:51:52.597105 | orchestrator |
2025-06-22 11:51:52.601783 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-22 11:51:52.601919 | orchestrator | Sunday 22 June 2025 11:51:52 +0000 (0:00:00.162) 0:00:25.732 ***********
2025-06-22 11:51:52.750359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd90edff2-979c-5e5e-98e2-f02394d35fb4'}})
2025-06-22 11:51:52.751436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9de1692c-afc0-5cdb-8a59-e564d6a096fc'}})
2025-06-22 11:51:52.752537 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:52.756632 | orchestrator |
2025-06-22 11:51:52.757310 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-22 11:51:52.757951 | orchestrator | Sunday 22 June 2025 11:51:52 +0000 (0:00:00.155) 0:00:25.888 ***********
2025-06-22 11:51:52.906778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd90edff2-979c-5e5e-98e2-f02394d35fb4'}})
2025-06-22 11:51:52.909932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9de1692c-afc0-5cdb-8a59-e564d6a096fc'}})
2025-06-22 11:51:52.911166 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:52.912076 | orchestrator |
2025-06-22 11:51:52.916767 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-22 11:51:52.917560 | orchestrator | Sunday 22 June 2025 11:51:52 +0000 (0:00:00.155) 0:00:26.043 ***********
2025-06-22 11:51:53.073214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd90edff2-979c-5e5e-98e2-f02394d35fb4'}})
2025-06-22 11:51:53.073333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9de1692c-afc0-5cdb-8a59-e564d6a096fc'}})
2025-06-22 11:51:53.075143 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:53.078326 | orchestrator |
2025-06-22 11:51:53.080507 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-22 11:51:53.081593 | orchestrator | Sunday 22 June 2025 11:51:53 +0000 (0:00:00.166) 0:00:26.210 ***********
2025-06-22 11:51:53.214195 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:51:53.214334 | orchestrator |
2025-06-22 11:51:53.214620 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-22 11:51:53.346114 | orchestrator | Sunday 22 June 2025 11:51:53 +0000 (0:00:00.142) 0:00:26.353 ***********
2025-06-22 11:51:53.356922 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:51:53.358125 | orchestrator |
2025-06-22 11:51:53.359264 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-22 11:51:53.360342 | orchestrator | Sunday 22 June 2025 11:51:53 +0000 (0:00:00.142) 0:00:26.495 ***********
2025-06-22 11:51:53.487752 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:53.488945 | orchestrator |
2025-06-22 11:51:53.490442 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-22 11:51:53.491208 | orchestrator | Sunday 22 June 2025 11:51:53 +0000 (0:00:00.130) 0:00:26.626 ***********
2025-06-22 11:51:53.855697 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:53.856132 | orchestrator |
2025-06-22 11:51:53.856757 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-22 11:51:53.857888 | orchestrator | Sunday 22 June 2025 11:51:53 +0000 (0:00:00.367) 0:00:26.993 ***********
2025-06-22 11:51:53.998449 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:53.999430 | orchestrator |
2025-06-22 11:51:54.000824 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-22 11:51:54.005063 | orchestrator | Sunday 22 June 2025 11:51:53 +0000 (0:00:00.143) 0:00:27.137 ***********
2025-06-22 11:51:54.152126 | orchestrator | ok: [testbed-node-4] => {
2025-06-22 11:51:54.152293 | orchestrator |  "ceph_osd_devices": {
2025-06-22 11:51:54.155923 | orchestrator |  "sdb": {
2025-06-22 11:51:54.158207 | orchestrator |  "osd_lvm_uuid": "d90edff2-979c-5e5e-98e2-f02394d35fb4"
2025-06-22 11:51:54.158607 | orchestrator |  },
2025-06-22 11:51:54.159345 | orchestrator |  "sdc": {
2025-06-22 11:51:54.160131 | orchestrator |  "osd_lvm_uuid": "9de1692c-afc0-5cdb-8a59-e564d6a096fc"
2025-06-22 11:51:54.161503 | orchestrator |  }
2025-06-22 11:51:54.165407 | orchestrator |  }
2025-06-22 11:51:54.165740 | orchestrator | }
2025-06-22 11:51:54.166222 | orchestrator |
2025-06-22 11:51:54.166622 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-22 11:51:54.167152 | orchestrator | Sunday 22 June 2025 11:51:54 +0000 (0:00:00.149) 0:00:27.287 ***********
2025-06-22 11:51:54.311633 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:54.312838 | orchestrator |
2025-06-22 11:51:54.318002 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-22 11:51:54.319724 | orchestrator | Sunday 22 June 2025 11:51:54 +0000 (0:00:00.161) 0:00:27.448 ***********
2025-06-22 11:51:54.467433 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:54.469237 | orchestrator |
2025-06-22 11:51:54.470564 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-22 11:51:54.471885 | orchestrator | Sunday 22 June 2025 11:51:54 +0000 (0:00:00.154) 0:00:27.603 ***********
2025-06-22 11:51:54.595817 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:51:54.596871 | orchestrator |
2025-06-22 11:51:54.599044 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-22 11:51:54.599764 | orchestrator | Sunday 22 June 2025 11:51:54 +0000 (0:00:00.129) 0:00:27.733 ***********
2025-06-22 11:51:54.812566 | orchestrator | changed: [testbed-node-4] => {
2025-06-22 11:51:54.814732 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-22 11:51:54.816492 | orchestrator |  "ceph_osd_devices": {
2025-06-22 11:51:54.818268 | orchestrator |  "sdb": {
2025-06-22 11:51:54.819565 | orchestrator |  "osd_lvm_uuid": "d90edff2-979c-5e5e-98e2-f02394d35fb4"
2025-06-22 11:51:54.821677 | orchestrator |  },
2025-06-22 11:51:54.822473 | orchestrator |  "sdc": {
2025-06-22 11:51:54.823215 | orchestrator |  "osd_lvm_uuid": "9de1692c-afc0-5cdb-8a59-e564d6a096fc"
2025-06-22 11:51:54.823748 | orchestrator |  }
2025-06-22 11:51:54.824333 | orchestrator |  },
2025-06-22 11:51:54.824843 | orchestrator |  "lvm_volumes": [
2025-06-22 11:51:54.825486 | orchestrator |  {
2025-06-22 11:51:54.825957 | orchestrator |  "data": "osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4",
2025-06-22 11:51:54.826273 | orchestrator |  "data_vg": "ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4"
2025-06-22 11:51:54.826893 | orchestrator |  },
2025-06-22 11:51:54.827185 | orchestrator |  {
2025-06-22 11:51:54.827845 | orchestrator |  "data": "osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc",
2025-06-22 11:51:54.828284 | orchestrator |  "data_vg": "ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc"
2025-06-22 11:51:54.828642 | orchestrator |  }
2025-06-22 11:51:54.829066 | orchestrator |  ]
2025-06-22 11:51:54.829681 | orchestrator |  }
2025-06-22 11:51:54.829905 | orchestrator | }
2025-06-22 11:51:54.830271 | orchestrator |
2025-06-22 11:51:54.830734 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-22 11:51:54.831207 | orchestrator | Sunday 22 June 2025 11:51:54 +0000 (0:00:00.216) 0:00:27.949 ***********
2025-06-22 11:51:56.040069 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-22 11:51:56.041016 | orchestrator |
2025-06-22 11:51:56.041828 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-22 11:51:56.042458 | orchestrator |
2025-06-22 11:51:56.043159 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-22 11:51:56.045269 | orchestrator | Sunday 22 June 2025 11:51:56 +0000 (0:00:01.227) 0:00:29.177 ***********
2025-06-22 11:51:56.493409 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-22 11:51:56.493918 | orchestrator |
2025-06-22 11:51:56.494760 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-22 11:51:56.496015 | orchestrator | Sunday 22 June 2025 11:51:56 +0000 (0:00:00.455) 0:00:29.632 ***********
2025-06-22 11:51:57.118089 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:51:57.119484 | orchestrator |
2025-06-22 11:51:57.120647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:51:57.120708 | orchestrator | Sunday 22 June 2025 11:51:57 +0000 (0:00:00.617) 0:00:30.249 ***********
2025-06-22 11:51:57.483220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-22 11:51:57.483933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-22 11:51:57.485004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-22 11:51:57.487641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-22 11:51:57.487691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-22 11:51:57.488461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-22 11:51:57.489357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-22 11:51:57.490649
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 11:51:57.491248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 11:51:57.492036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 11:51:57.492794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 11:51:57.493624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 11:51:57.494446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 11:51:57.495097 | orchestrator | 2025-06-22 11:51:57.495656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:57.496076 | orchestrator | Sunday 22 June 2025 11:51:57 +0000 (0:00:00.371) 0:00:30.621 *********** 2025-06-22 11:51:57.698841 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:57.699510 | orchestrator | 2025-06-22 11:51:57.701064 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:57.701883 | orchestrator | Sunday 22 June 2025 11:51:57 +0000 (0:00:00.215) 0:00:30.836 *********** 2025-06-22 11:51:57.904166 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:57.904730 | orchestrator | 2025-06-22 11:51:57.905227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:57.906260 | orchestrator | Sunday 22 June 2025 11:51:57 +0000 (0:00:00.205) 0:00:31.041 *********** 2025-06-22 11:51:58.099272 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:58.099373 | orchestrator | 2025-06-22 11:51:58.100222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:58.100943 | 
orchestrator | Sunday 22 June 2025 11:51:58 +0000 (0:00:00.191) 0:00:31.233 *********** 2025-06-22 11:51:58.309083 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:58.309209 | orchestrator | 2025-06-22 11:51:58.309225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:58.309772 | orchestrator | Sunday 22 June 2025 11:51:58 +0000 (0:00:00.214) 0:00:31.447 *********** 2025-06-22 11:51:58.512850 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:58.513035 | orchestrator | 2025-06-22 11:51:58.513132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:58.514180 | orchestrator | Sunday 22 June 2025 11:51:58 +0000 (0:00:00.202) 0:00:31.650 *********** 2025-06-22 11:51:58.729464 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:58.730176 | orchestrator | 2025-06-22 11:51:58.730981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:58.731402 | orchestrator | Sunday 22 June 2025 11:51:58 +0000 (0:00:00.217) 0:00:31.868 *********** 2025-06-22 11:51:58.913750 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:58.914617 | orchestrator | 2025-06-22 11:51:58.914948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:58.915932 | orchestrator | Sunday 22 June 2025 11:51:58 +0000 (0:00:00.183) 0:00:32.051 *********** 2025-06-22 11:51:59.129125 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:51:59.129331 | orchestrator | 2025-06-22 11:51:59.130153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:59.131329 | orchestrator | Sunday 22 June 2025 11:51:59 +0000 (0:00:00.212) 0:00:32.264 *********** 2025-06-22 11:51:59.724151 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033) 2025-06-22 11:51:59.725475 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033) 2025-06-22 11:51:59.726629 | orchestrator | 2025-06-22 11:51:59.727471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:51:59.728422 | orchestrator | Sunday 22 June 2025 11:51:59 +0000 (0:00:00.598) 0:00:32.862 *********** 2025-06-22 11:52:00.672752 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f) 2025-06-22 11:52:00.673733 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f) 2025-06-22 11:52:00.675064 | orchestrator | 2025-06-22 11:52:00.675158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:52:00.675819 | orchestrator | Sunday 22 June 2025 11:52:00 +0000 (0:00:00.947) 0:00:33.809 *********** 2025-06-22 11:52:01.099051 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e) 2025-06-22 11:52:01.099420 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e) 2025-06-22 11:52:01.099992 | orchestrator | 2025-06-22 11:52:01.101096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:52:01.102183 | orchestrator | Sunday 22 June 2025 11:52:01 +0000 (0:00:00.426) 0:00:34.236 *********** 2025-06-22 11:52:01.547818 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52) 2025-06-22 11:52:01.548318 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52) 2025-06-22 11:52:01.549437 | orchestrator | 2025-06-22 11:52:01.550231 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-06-22 11:52:01.551360 | orchestrator | Sunday 22 June 2025 11:52:01 +0000 (0:00:00.449) 0:00:34.685 *********** 2025-06-22 11:52:01.867537 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 11:52:01.868091 | orchestrator | 2025-06-22 11:52:01.868627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:01.869093 | orchestrator | Sunday 22 June 2025 11:52:01 +0000 (0:00:00.320) 0:00:35.006 *********** 2025-06-22 11:52:02.260972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 11:52:02.261374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 11:52:02.262893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-22 11:52:02.263856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 11:52:02.265878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 11:52:02.266272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 11:52:02.267241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 11:52:02.267937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 11:52:02.268683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 11:52:02.269257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 11:52:02.270164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-06-22 11:52:02.270416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 11:52:02.271093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 11:52:02.271760 | orchestrator | 2025-06-22 11:52:02.272056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:02.272685 | orchestrator | Sunday 22 June 2025 11:52:02 +0000 (0:00:00.391) 0:00:35.398 *********** 2025-06-22 11:52:02.469534 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:02.469699 | orchestrator | 2025-06-22 11:52:02.470552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:02.471344 | orchestrator | Sunday 22 June 2025 11:52:02 +0000 (0:00:00.209) 0:00:35.607 *********** 2025-06-22 11:52:02.665982 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:02.667094 | orchestrator | 2025-06-22 11:52:02.668080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:02.668711 | orchestrator | Sunday 22 June 2025 11:52:02 +0000 (0:00:00.196) 0:00:35.803 *********** 2025-06-22 11:52:02.877266 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:02.878067 | orchestrator | 2025-06-22 11:52:02.878791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:02.879774 | orchestrator | Sunday 22 June 2025 11:52:02 +0000 (0:00:00.211) 0:00:36.015 *********** 2025-06-22 11:52:03.081440 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:03.081852 | orchestrator | 2025-06-22 11:52:03.082716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:03.083655 | orchestrator | Sunday 22 June 2025 11:52:03 +0000 (0:00:00.204) 0:00:36.219 *********** 2025-06-22 11:52:03.280629 
| orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:03.280951 | orchestrator | 2025-06-22 11:52:03.281902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:03.283548 | orchestrator | Sunday 22 June 2025 11:52:03 +0000 (0:00:00.199) 0:00:36.418 *********** 2025-06-22 11:52:03.995165 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:03.996034 | orchestrator | 2025-06-22 11:52:03.998394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:03.998490 | orchestrator | Sunday 22 June 2025 11:52:03 +0000 (0:00:00.713) 0:00:37.132 *********** 2025-06-22 11:52:04.206741 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:04.207606 | orchestrator | 2025-06-22 11:52:04.208510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:04.209344 | orchestrator | Sunday 22 June 2025 11:52:04 +0000 (0:00:00.212) 0:00:37.344 *********** 2025-06-22 11:52:04.426725 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:04.426938 | orchestrator | 2025-06-22 11:52:04.430133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:04.430167 | orchestrator | Sunday 22 June 2025 11:52:04 +0000 (0:00:00.217) 0:00:37.562 *********** 2025-06-22 11:52:05.051107 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 11:52:05.052115 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 11:52:05.053126 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 11:52:05.053770 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 11:52:05.054751 | orchestrator | 2025-06-22 11:52:05.055793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:05.056194 | orchestrator | Sunday 22 June 2025 11:52:05 +0000 (0:00:00.627) 0:00:38.189 
*********** 2025-06-22 11:52:05.255483 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:05.255638 | orchestrator | 2025-06-22 11:52:05.256958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:05.258836 | orchestrator | Sunday 22 June 2025 11:52:05 +0000 (0:00:00.202) 0:00:38.392 *********** 2025-06-22 11:52:05.459801 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:05.459904 | orchestrator | 2025-06-22 11:52:05.460670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:05.460960 | orchestrator | Sunday 22 June 2025 11:52:05 +0000 (0:00:00.204) 0:00:38.596 *********** 2025-06-22 11:52:05.667750 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:05.669142 | orchestrator | 2025-06-22 11:52:05.672048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:52:05.672081 | orchestrator | Sunday 22 June 2025 11:52:05 +0000 (0:00:00.208) 0:00:38.805 *********** 2025-06-22 11:52:05.856467 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:05.857120 | orchestrator | 2025-06-22 11:52:05.857672 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 11:52:05.858215 | orchestrator | Sunday 22 June 2025 11:52:05 +0000 (0:00:00.189) 0:00:38.994 *********** 2025-06-22 11:52:06.035092 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-22 11:52:06.036187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-22 11:52:06.036281 | orchestrator | 2025-06-22 11:52:06.037185 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 11:52:06.037406 | orchestrator | Sunday 22 June 2025 11:52:06 +0000 (0:00:00.178) 0:00:39.172 *********** 2025-06-22 11:52:06.205463 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 11:52:06.206632 | orchestrator | 2025-06-22 11:52:06.207751 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 11:52:06.208901 | orchestrator | Sunday 22 June 2025 11:52:06 +0000 (0:00:00.170) 0:00:39.343 *********** 2025-06-22 11:52:06.356886 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:06.357893 | orchestrator | 2025-06-22 11:52:06.359300 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 11:52:06.360667 | orchestrator | Sunday 22 June 2025 11:52:06 +0000 (0:00:00.150) 0:00:39.493 *********** 2025-06-22 11:52:06.504714 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:06.505779 | orchestrator | 2025-06-22 11:52:06.507808 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 11:52:06.509400 | orchestrator | Sunday 22 June 2025 11:52:06 +0000 (0:00:00.147) 0:00:39.641 *********** 2025-06-22 11:52:06.930408 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:52:06.931608 | orchestrator | 2025-06-22 11:52:06.931696 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 11:52:06.932769 | orchestrator | Sunday 22 June 2025 11:52:06 +0000 (0:00:00.425) 0:00:40.067 *********** 2025-06-22 11:52:07.122093 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a4028de-648e-5a19-94a5-5dc0f00dede1'}}) 2025-06-22 11:52:07.123070 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d622d46-9f3b-5fb0-a039-cce126484330'}}) 2025-06-22 11:52:07.123537 | orchestrator | 2025-06-22 11:52:07.125149 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 11:52:07.127492 | orchestrator | Sunday 22 June 2025 11:52:07 +0000 (0:00:00.191) 0:00:40.259 *********** 2025-06-22 11:52:07.305718 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a4028de-648e-5a19-94a5-5dc0f00dede1'}})  2025-06-22 11:52:07.305906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d622d46-9f3b-5fb0-a039-cce126484330'}})  2025-06-22 11:52:07.307090 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:07.309386 | orchestrator | 2025-06-22 11:52:07.310480 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 11:52:07.311896 | orchestrator | Sunday 22 June 2025 11:52:07 +0000 (0:00:00.183) 0:00:40.442 *********** 2025-06-22 11:52:07.484676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a4028de-648e-5a19-94a5-5dc0f00dede1'}})  2025-06-22 11:52:07.487230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d622d46-9f3b-5fb0-a039-cce126484330'}})  2025-06-22 11:52:07.488233 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:07.490437 | orchestrator | 2025-06-22 11:52:07.494220 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 11:52:07.495749 | orchestrator | Sunday 22 June 2025 11:52:07 +0000 (0:00:00.179) 0:00:40.621 *********** 2025-06-22 11:52:07.625457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a4028de-648e-5a19-94a5-5dc0f00dede1'}})  2025-06-22 11:52:07.626224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d622d46-9f3b-5fb0-a039-cce126484330'}})  2025-06-22 11:52:07.627215 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:52:07.628883 | orchestrator | 2025-06-22 11:52:07.629671 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 11:52:07.630784 | orchestrator | Sunday 22 June 2025 11:52:07 +0000 
(0:00:00.140) 0:00:40.762 ***********
2025-06-22 11:52:07.752327 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:52:07.752549 | orchestrator |
2025-06-22 11:52:07.753747 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-22 11:52:07.754733 | orchestrator | Sunday 22 June 2025 11:52:07 +0000 (0:00:00.127) 0:00:40.890 ***********
2025-06-22 11:52:07.878252 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:52:07.878987 | orchestrator |
2025-06-22 11:52:07.879907 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-22 11:52:07.880439 | orchestrator | Sunday 22 June 2025 11:52:07 +0000 (0:00:00.126) 0:00:41.017 ***********
2025-06-22 11:52:08.018824 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:52:08.019670 | orchestrator |
2025-06-22 11:52:08.020077 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-22 11:52:08.020556 | orchestrator | Sunday 22 June 2025 11:52:08 +0000 (0:00:00.139) 0:00:41.156 ***********
2025-06-22 11:52:08.175694 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:52:08.176282 | orchestrator |
2025-06-22 11:52:08.176988 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-22 11:52:08.177669 | orchestrator | Sunday 22 June 2025 11:52:08 +0000 (0:00:00.158) 0:00:41.314 ***********
2025-06-22 11:52:08.316101 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:52:08.317826 | orchestrator |
2025-06-22 11:52:08.318860 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-22 11:52:08.320676 | orchestrator | Sunday 22 June 2025 11:52:08 +0000 (0:00:00.135) 0:00:41.450 ***********
2025-06-22 11:52:08.477989 | orchestrator | ok: [testbed-node-5] => {
2025-06-22 11:52:08.478628 | orchestrator |     "ceph_osd_devices": {
2025-06-22 11:52:08.479714 | orchestrator |         "sdb": {
2025-06-22 11:52:08.480907 | orchestrator |             "osd_lvm_uuid": "8a4028de-648e-5a19-94a5-5dc0f00dede1"
2025-06-22 11:52:08.481727 | orchestrator |         },
2025-06-22 11:52:08.483225 | orchestrator |         "sdc": {
2025-06-22 11:52:08.484297 | orchestrator |             "osd_lvm_uuid": "1d622d46-9f3b-5fb0-a039-cce126484330"
2025-06-22 11:52:08.486481 | orchestrator |         }
2025-06-22 11:52:08.488075 | orchestrator |     }
2025-06-22 11:52:08.489392 | orchestrator | }
2025-06-22 11:52:08.491617 | orchestrator |
2025-06-22 11:52:08.493058 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-22 11:52:08.493886 | orchestrator | Sunday 22 June 2025 11:52:08 +0000 (0:00:00.165) 0:00:41.615 ***********
2025-06-22 11:52:08.620728 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:52:08.620900 | orchestrator |
2025-06-22 11:52:08.622367 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-22 11:52:08.623473 | orchestrator | Sunday 22 June 2025 11:52:08 +0000 (0:00:00.142) 0:00:41.758 ***********
2025-06-22 11:52:08.947787 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:52:08.948452 | orchestrator |
2025-06-22 11:52:08.949187 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-22 11:52:08.950369 | orchestrator | Sunday 22 June 2025 11:52:08 +0000 (0:00:00.328) 0:00:42.086 ***********
2025-06-22 11:52:09.088529 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:52:09.090626 | orchestrator |
2025-06-22 11:52:09.090658 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-22 11:52:09.090719 | orchestrator | Sunday 22 June 2025 11:52:09 +0000 (0:00:00.137) 0:00:42.223 ***********
2025-06-22 11:52:09.288142 | orchestrator | changed: [testbed-node-5] => {
2025-06-22 11:52:09.288620 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-22 11:52:09.289065 | orchestrator |         "ceph_osd_devices": {
2025-06-22 11:52:09.290688 | orchestrator |             "sdb": {
2025-06-22 11:52:09.291549 | orchestrator |                 "osd_lvm_uuid": "8a4028de-648e-5a19-94a5-5dc0f00dede1"
2025-06-22 11:52:09.292633 | orchestrator |             },
2025-06-22 11:52:09.293704 | orchestrator |             "sdc": {
2025-06-22 11:52:09.294726 | orchestrator |                 "osd_lvm_uuid": "1d622d46-9f3b-5fb0-a039-cce126484330"
2025-06-22 11:52:09.295297 | orchestrator |             }
2025-06-22 11:52:09.296140 | orchestrator |         },
2025-06-22 11:52:09.297064 | orchestrator |         "lvm_volumes": [
2025-06-22 11:52:09.297165 | orchestrator |             {
2025-06-22 11:52:09.297872 | orchestrator |                 "data": "osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1",
2025-06-22 11:52:09.298394 | orchestrator |                 "data_vg": "ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1"
2025-06-22 11:52:09.298774 | orchestrator |             },
2025-06-22 11:52:09.299784 | orchestrator |             {
2025-06-22 11:52:09.300273 | orchestrator |                 "data": "osd-block-1d622d46-9f3b-5fb0-a039-cce126484330",
2025-06-22 11:52:09.300369 | orchestrator |                 "data_vg": "ceph-1d622d46-9f3b-5fb0-a039-cce126484330"
2025-06-22 11:52:09.301443 | orchestrator |             }
2025-06-22 11:52:09.301719 | orchestrator |         ]
2025-06-22 11:52:09.302252 | orchestrator |     }
2025-06-22 11:52:09.302806 | orchestrator | }
2025-06-22 11:52:09.302890 | orchestrator |
2025-06-22 11:52:09.303598 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-22 11:52:09.303623 | orchestrator | Sunday 22 June 2025 11:52:09 +0000 (0:00:00.202) 0:00:42.425 ***********
2025-06-22 11:52:10.353685 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-22 11:52:10.354102 | orchestrator |
2025-06-22 11:52:10.354874 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:52:10.355335 | orchestrator | 2025-06-22 11:52:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:52:10.355559 | orchestrator | 2025-06-22 11:52:10 | INFO  | Please wait and do not abort execution.
2025-06-22 11:52:10.356640 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-22 11:52:10.357322 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-22 11:52:10.358295 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-22 11:52:10.359076 | orchestrator |
2025-06-22 11:52:10.359650 | orchestrator |
2025-06-22 11:52:10.360237 | orchestrator |
2025-06-22 11:52:10.360921 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:52:10.361538 | orchestrator | Sunday 22 June 2025 11:52:10 +0000 (0:00:01.064) 0:00:43.490 ***********
2025-06-22 11:52:10.362081 | orchestrator | ===============================================================================
2025-06-22 11:52:10.362478 | orchestrator | Write configuration file ------------------------------------------------ 4.53s
2025-06-22 11:52:10.363532 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s
2025-06-22 11:52:10.363553 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2025-06-22 11:52:10.363912 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2025-06-22 11:52:10.364623 | orchestrator | Get initial list of available block devices ----------------------------- 1.10s
2025-06-22 11:52:10.366599 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2025-06-22 11:52:10.367191 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2025-06-22 11:52:10.367779 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s
2025-06-22 11:52:10.368278 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.75s
2025-06-22 11:52:10.368817 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.73s
2025-06-22 11:52:10.369301 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.73s
2025-06-22 11:52:10.370076 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-06-22 11:52:10.370332 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-06-22 11:52:10.370714 | orchestrator | Set WAL devices config data --------------------------------------------- 0.65s
2025-06-22 11:52:10.371166 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-06-22 11:52:10.371644 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-06-22 11:52:10.372101 | orchestrator | Print configuration data ------------------------------------------------ 0.63s
2025-06-22 11:52:10.372660 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-06-22 11:52:10.373102 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-06-22 11:52:10.373386 | orchestrator | Print DB devices -------------------------------------------------------- 0.62s
2025-06-22 11:52:22.851425 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:52:22.851615 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:52:22.851648 | orchestrator | Registering Redlock._release_script
2025-06-22 11:52:22.913054 | orchestrator | 2025-06-22 11:52:22 | INFO  | Task 17746cb9-83c0-4325-8631-c72bf0a8f18b (sync inventory) is running in background. Output coming soon.
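The `lvm_volumes` list in the "Print configuration data" debug output above is a pure function of `ceph_osd_devices`: each device's `osd_lvm_uuid` value `U` yields an entry with `data: "osd-block-U"` and `data_vg: "ceph-U"`. A minimal Python sketch of that mapping, using the testbed-node-5 values from the log (illustrative only; `build_lvm_volumes` is a hypothetical helper, not the playbook's actual task code):

```python
# Sketch of the ceph_osd_devices -> lvm_volumes mapping visible in the
# "Print configuration data" debug output. NOT the playbook's real code;
# build_lvm_volumes is a made-up helper name for illustration.
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    # One lvm_volumes entry per OSD device, keyed off its osd_lvm_uuid.
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for device, spec in sorted(ceph_osd_devices.items())
    ]

# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "8a4028de-648e-5a19-94a5-5dc0f00dede1"},
    "sdc": {"osd_lvm_uuid": "1d622d46-9f3b-5fb0-a039-cce126484330"},
}
print(build_lvm_volumes(devices))
```

Running this reproduces the two `lvm_volumes` entries shown in the debug output for testbed-node-5.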
2025-06-22 11:52:40.324948 | orchestrator | 2025-06-22 11:52:24 | INFO  | Starting group_vars file reorganization
2025-06-22 11:52:40.325043 | orchestrator | 2025-06-22 11:52:24 | INFO  | Moved 0 file(s) to their respective directories
2025-06-22 11:52:40.325059 | orchestrator | 2025-06-22 11:52:24 | INFO  | Group_vars file reorganization completed
2025-06-22 11:52:40.325071 | orchestrator | 2025-06-22 11:52:26 | INFO  | Starting variable preparation from inventory
2025-06-22 11:52:40.325082 | orchestrator | 2025-06-22 11:52:27 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-22 11:52:40.325093 | orchestrator | 2025-06-22 11:52:27 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-22 11:52:40.325124 | orchestrator | 2025-06-22 11:52:27 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-22 11:52:40.325135 | orchestrator | 2025-06-22 11:52:27 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-22 11:52:40.325147 | orchestrator | 2025-06-22 11:52:27 | INFO  | Variable preparation completed:
2025-06-22 11:52:40.325157 | orchestrator | 2025-06-22 11:52:28 | INFO  | Starting inventory overwrite handling
2025-06-22 11:52:40.325168 | orchestrator | 2025-06-22 11:52:28 | INFO  | Handling group overwrites in 99-overwrite
2025-06-22 11:52:40.325179 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removing group frr:children from 60-generic
2025-06-22 11:52:40.325190 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removing group storage:children from 50-kolla
2025-06-22 11:52:40.325200 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-22 11:52:40.325217 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-22 11:52:40.325229 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-22 11:52:40.325240 | orchestrator | 2025-06-22 11:52:28 | INFO  | Handling group overwrites in 20-roles
2025-06-22 11:52:40.325250 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-22 11:52:40.325261 | orchestrator | 2025-06-22 11:52:28 | INFO  | Removed 6 group(s) in total
2025-06-22 11:52:40.325272 | orchestrator | 2025-06-22 11:52:28 | INFO  | Inventory overwrite handling completed
2025-06-22 11:52:40.325283 | orchestrator | 2025-06-22 11:52:29 | INFO  | Starting merge of inventory files
2025-06-22 11:52:40.325294 | orchestrator | 2025-06-22 11:52:29 | INFO  | Inventory files merged successfully
2025-06-22 11:52:40.325305 | orchestrator | 2025-06-22 11:52:33 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-22 11:52:40.325316 | orchestrator | 2025-06-22 11:52:39 | INFO  | Successfully wrote ClusterShell configuration
2025-06-22 11:52:40.325326 | orchestrator | [master 8fbb8f0] 2025-06-22-11-52
2025-06-22 11:52:40.325338 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-06-22 11:52:41.997484 | orchestrator | 2025-06-22 11:52:41 | INFO  | Task 58162237-55af-48cc-b0d6-7597d4b3aa49 (ceph-create-lvm-devices) was prepared for execution.
2025-06-22 11:52:41.997648 | orchestrator | 2025-06-22 11:52:41 | INFO  | It takes a moment until task 58162237-55af-48cc-b0d6-7597d4b3aa49 (ceph-create-lvm-devices) has been started and output is visible here.
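A side note on the `osd_lvm_uuid` values logged above: every one of them has `5` as the version nibble (e.g. `...-5e5e-...`, `...-5cdb-...`, `...-5a19-...`, `...-5fb0-...`), i.e. they are name-based UUIDv5 values, which suggests they are derived deterministically rather than drawn at random. The actual namespace and name inputs are not visible in this log, so the scheme below is purely an assumed illustration of how such stable per-host, per-device UUIDs can be produced:

```python
import uuid

# The osd_lvm_uuid values in the log carry version nibble 5 (name-based
# UUIDs), suggesting deterministic derivation. The namespace and the
# "<hostname>-<device>" name scheme below are ASSUMPTIONS for illustration;
# the playbook's actual inputs are not visible in this log.
NAMESPACE = uuid.NAMESPACE_DNS  # assumed namespace


def osd_lvm_uuid(hostname: str, device: str) -> str:
    # uuid5 is a pure function of (namespace, name), so repeated calls
    # with the same inputs always return the same UUID.
    return str(uuid.uuid5(NAMESPACE, f"{hostname}-{device}"))


u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u)
```

Determinism is what lets the play be re-run safely: the same host/device pair maps to the same VG/LV names on every run.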
2025-06-22 11:52:45.446332 | orchestrator |
2025-06-22 11:52:45.447716 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-22 11:52:45.447750 | orchestrator |
2025-06-22 11:52:45.447998 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-22 11:52:45.448705 | orchestrator | Sunday 22 June 2025 11:52:45 +0000 (0:00:00.232) 0:00:00.232 ***********
2025-06-22 11:52:45.658255 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-22 11:52:45.658387 | orchestrator |
2025-06-22 11:52:45.658693 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-22 11:52:45.659789 | orchestrator | Sunday 22 June 2025 11:52:45 +0000 (0:00:00.214) 0:00:00.446 ***********
2025-06-22 11:52:45.869097 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:52:45.870070 | orchestrator |
2025-06-22 11:52:45.871074 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:45.871963 | orchestrator | Sunday 22 June 2025 11:52:45 +0000 (0:00:00.210) 0:00:00.657 ***********
2025-06-22 11:52:46.225799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-22 11:52:46.225867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-22 11:52:46.226696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-22 11:52:46.227840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-22 11:52:46.228819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-22 11:52:46.230486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-22 11:52:46.231470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-22 11:52:46.233212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-22 11:52:46.233988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-22 11:52:46.234611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-22 11:52:46.235105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-22 11:52:46.235710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-22 11:52:46.236199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-22 11:52:46.236749 | orchestrator |
2025-06-22 11:52:46.237667 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:46.238110 | orchestrator | Sunday 22 June 2025 11:52:46 +0000 (0:00:00.355) 0:00:01.012 ***********
2025-06-22 11:52:46.552148 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:46.552302 | orchestrator |
2025-06-22 11:52:46.553167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:46.553766 | orchestrator | Sunday 22 June 2025 11:52:46 +0000 (0:00:00.326) 0:00:01.339 ***********
2025-06-22 11:52:46.714807 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:46.715308 | orchestrator |
2025-06-22 11:52:46.716429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:46.716875 | orchestrator | Sunday 22 June 2025 11:52:46 +0000 (0:00:00.163) 0:00:01.502 ***********
2025-06-22 11:52:46.883360 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:46.884247 | orchestrator |
2025-06-22 11:52:46.885718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:46.886498 | orchestrator | Sunday 22 June 2025 11:52:46 +0000 (0:00:00.169) 0:00:01.671 ***********
2025-06-22 11:52:47.055134 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:47.055314 | orchestrator |
2025-06-22 11:52:47.056552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:47.056863 | orchestrator | Sunday 22 June 2025 11:52:47 +0000 (0:00:00.171) 0:00:01.843 ***********
2025-06-22 11:52:47.230237 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:47.230377 | orchestrator |
2025-06-22 11:52:47.230546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:47.230893 | orchestrator | Sunday 22 June 2025 11:52:47 +0000 (0:00:00.176) 0:00:02.019 ***********
2025-06-22 11:52:47.417338 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:47.417427 | orchestrator |
2025-06-22 11:52:47.417515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:47.418451 | orchestrator | Sunday 22 June 2025 11:52:47 +0000 (0:00:00.186) 0:00:02.205 ***********
2025-06-22 11:52:47.600246 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:47.600394 | orchestrator |
2025-06-22 11:52:47.600885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:47.601047 | orchestrator | Sunday 22 June 2025 11:52:47 +0000 (0:00:00.180) 0:00:02.385 ***********
2025-06-22 11:52:47.761724 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:47.763101 | orchestrator |
2025-06-22 11:52:47.763919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:47.765957 | orchestrator | Sunday 22 June 2025 11:52:47 +0000 (0:00:00.163) 0:00:02.548 ***********
2025-06-22 11:52:48.135494 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41)
2025-06-22 11:52:48.135863 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41)
2025-06-22 11:52:48.137467 | orchestrator |
2025-06-22 11:52:48.138590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:48.139154 | orchestrator | Sunday 22 June 2025 11:52:48 +0000 (0:00:00.374) 0:00:02.923 ***********
2025-06-22 11:52:48.520180 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2)
2025-06-22 11:52:48.520941 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2)
2025-06-22 11:52:48.521499 | orchestrator |
2025-06-22 11:52:48.522639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:48.523489 | orchestrator | Sunday 22 June 2025 11:52:48 +0000 (0:00:00.384) 0:00:03.308 ***********
2025-06-22 11:52:49.066088 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357)
2025-06-22 11:52:49.066295 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357)
2025-06-22 11:52:49.067436 | orchestrator |
2025-06-22 11:52:49.068026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:49.068468 | orchestrator | Sunday 22 June 2025 11:52:49 +0000 (0:00:00.544) 0:00:03.852 ***********
2025-06-22 11:52:49.568937 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b)
2025-06-22 11:52:49.569077 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b)
2025-06-22 11:52:49.569824 | orchestrator |
2025-06-22 11:52:49.570090 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:52:49.570708 | orchestrator | Sunday 22 June 2025 11:52:49 +0000 (0:00:00.504) 0:00:04.357 ***********
2025-06-22 11:52:50.112239 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-22 11:52:50.112921 | orchestrator |
2025-06-22 11:52:50.113155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:50.113554 | orchestrator | Sunday 22 June 2025 11:52:50 +0000 (0:00:00.541) 0:00:04.899 ***********
2025-06-22 11:52:50.462617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-22 11:52:50.463277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-22 11:52:50.464613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-22 11:52:50.464637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-22 11:52:50.465525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-22 11:52:50.466515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-22 11:52:50.467323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-22 11:52:50.468646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-22 11:52:50.468880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-22 11:52:50.469421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-22 11:52:50.470135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-22 11:52:50.470698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-22 11:52:50.471336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-22 11:52:50.471722 | orchestrator |
2025-06-22 11:52:50.472285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:50.472728 | orchestrator | Sunday 22 June 2025 11:52:50 +0000 (0:00:00.351) 0:00:05.250 ***********
2025-06-22 11:52:50.654469 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:50.655160 | orchestrator |
2025-06-22 11:52:50.657035 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:50.657063 | orchestrator | Sunday 22 June 2025 11:52:50 +0000 (0:00:00.190) 0:00:05.440 ***********
2025-06-22 11:52:50.842354 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:50.842460 | orchestrator |
2025-06-22 11:52:50.842799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:50.843703 | orchestrator | Sunday 22 June 2025 11:52:50 +0000 (0:00:00.189) 0:00:05.630 ***********
2025-06-22 11:52:51.024697 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:51.025341 | orchestrator |
2025-06-22 11:52:51.027631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:51.028336 | orchestrator | Sunday 22 June 2025 11:52:51 +0000 (0:00:00.181) 0:00:05.811 ***********
2025-06-22 11:52:51.245683 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:51.246080 | orchestrator |
2025-06-22 11:52:51.246739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:51.247903 | orchestrator | Sunday 22 June 2025 11:52:51 +0000 (0:00:00.222) 0:00:06.033 ***********
2025-06-22 11:52:51.444253 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:51.444710 | orchestrator |
2025-06-22 11:52:51.446222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:51.446252 | orchestrator | Sunday 22 June 2025 11:52:51 +0000 (0:00:00.195) 0:00:06.229 ***********
2025-06-22 11:52:51.644000 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:51.645330 | orchestrator |
2025-06-22 11:52:51.646261 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:51.646899 | orchestrator | Sunday 22 June 2025 11:52:51 +0000 (0:00:00.202) 0:00:06.431 ***********
2025-06-22 11:52:51.838205 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:51.838729 | orchestrator |
2025-06-22 11:52:51.840251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:51.840285 | orchestrator | Sunday 22 June 2025 11:52:51 +0000 (0:00:00.193) 0:00:06.625 ***********
2025-06-22 11:52:52.032721 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:52.036222 | orchestrator |
2025-06-22 11:52:52.036465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:52.037024 | orchestrator | Sunday 22 June 2025 11:52:52 +0000 (0:00:00.193) 0:00:06.819 ***********
2025-06-22 11:52:53.074501 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-22 11:52:53.075017 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-22 11:52:53.075726 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-22 11:52:53.076430 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-22 11:52:53.077632 | orchestrator |
2025-06-22 11:52:53.078497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:53.078744 | orchestrator | Sunday 22 June 2025 11:52:53 +0000 (0:00:01.041) 0:00:07.861 ***********
2025-06-22 11:52:53.277786 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:53.278789 | orchestrator |
2025-06-22 11:52:53.280118 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:53.280703 | orchestrator | Sunday 22 June 2025 11:52:53 +0000 (0:00:00.204) 0:00:08.065 ***********
2025-06-22 11:52:53.471721 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:53.472495 | orchestrator |
2025-06-22 11:52:53.472920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:53.473417 | orchestrator | Sunday 22 June 2025 11:52:53 +0000 (0:00:00.193) 0:00:08.259 ***********
2025-06-22 11:52:53.676210 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:53.676339 | orchestrator |
2025-06-22 11:52:53.677010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:52:53.679666 | orchestrator | Sunday 22 June 2025 11:52:53 +0000 (0:00:00.204) 0:00:08.463 ***********
2025-06-22 11:52:53.870906 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:53.871896 | orchestrator |
2025-06-22 11:52:53.873012 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-22 11:52:53.874263 | orchestrator | Sunday 22 June 2025 11:52:53 +0000 (0:00:00.193) 0:00:08.657 ***********
2025-06-22 11:52:54.008874 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:54.010768 | orchestrator |
2025-06-22 11:52:54.011718 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-22 11:52:54.011759 | orchestrator | Sunday 22 June 2025 11:52:54 +0000 (0:00:00.136) 0:00:08.794 ***********
2025-06-22 11:52:54.195991 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}})
2025-06-22 11:52:54.196481 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0b51a6ec-8722-57c7-ad6b-56758d62ede6'}})
2025-06-22 11:52:54.197662 | orchestrator |
2025-06-22 11:52:54.198120 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-22 11:52:54.199031 | orchestrator | Sunday 22 June 2025 11:52:54 +0000 (0:00:00.188) 0:00:08.983 ***********
2025-06-22 11:52:56.187709 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:56.187886 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:56.188644 | orchestrator |
2025-06-22 11:52:56.190642 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-22 11:52:56.193797 | orchestrator | Sunday 22 June 2025 11:52:56 +0000 (0:00:01.989) 0:00:10.973 ***********
2025-06-22 11:52:56.343159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:56.343429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:56.344858 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:56.347024 | orchestrator |
2025-06-22 11:52:56.347496 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-22 11:52:56.348715 | orchestrator | Sunday 22 June 2025 11:52:56 +0000 (0:00:00.156) 0:00:11.129 ***********
2025-06-22 11:52:57.765473 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:57.765664 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:57.766368 | orchestrator |
2025-06-22 11:52:57.767414 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-22 11:52:57.768501 | orchestrator | Sunday 22 June 2025 11:52:57 +0000 (0:00:01.421) 0:00:12.551 ***********
2025-06-22 11:52:57.912841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:57.913558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:57.913897 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:57.914694 | orchestrator |
2025-06-22 11:52:57.915619 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-22 11:52:57.916158 | orchestrator | Sunday 22 June 2025 11:52:57 +0000 (0:00:00.148) 0:00:12.699 ***********
2025-06-22 11:52:58.040959 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:58.042314 | orchestrator |
2025-06-22 11:52:58.043511 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-22 11:52:58.044698 | orchestrator | Sunday 22 June 2025 11:52:58 +0000 (0:00:00.127) 0:00:12.827 ***********
2025-06-22 11:52:58.374279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:58.375141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:58.375855 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:58.376383 | orchestrator |
2025-06-22 11:52:58.376988 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-22 11:52:58.377677 | orchestrator | Sunday 22 June 2025 11:52:58 +0000 (0:00:00.330) 0:00:13.158 ***********
2025-06-22 11:52:58.513048 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:58.513146 | orchestrator |
2025-06-22 11:52:58.513219 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-22 11:52:58.514505 | orchestrator | Sunday 22 June 2025 11:52:58 +0000 (0:00:00.139) 0:00:13.297 ***********
2025-06-22 11:52:58.658958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:58.659193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:58.660698 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:58.662079 | orchestrator |
2025-06-22 11:52:58.662791 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-22 11:52:58.663620 | orchestrator | Sunday 22 June 2025 11:52:58 +0000 (0:00:00.148) 0:00:13.445 ***********
2025-06-22 11:52:58.787522 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:58.788094 | orchestrator |
2025-06-22 11:52:58.788747 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-22 11:52:58.789128 | orchestrator | Sunday 22 June 2025 11:52:58 +0000 (0:00:00.130) 0:00:13.576 ***********
2025-06-22 11:52:58.943061 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:58.944460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:58.944602 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:58.945621 | orchestrator |
2025-06-22 11:52:58.946938 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-22 11:52:58.948035 | orchestrator | Sunday 22 June 2025 11:52:58 +0000 (0:00:00.154) 0:00:13.730 ***********
2025-06-22 11:52:59.084105 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:52:59.084808 | orchestrator |
2025-06-22 11:52:59.085678 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-22 11:52:59.086513 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.141) 0:00:13.871 ***********
2025-06-22 11:52:59.233429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:59.234261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:52:59.235141 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:52:59.237244 | orchestrator |
2025-06-22 11:52:59.237288 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-22 11:52:59.237302 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.149) 0:00:14.020 ***********
2025-06-22 11:52:59.383959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:52:59.385481 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:52:59.387716 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:52:59.388338 | orchestrator | 2025-06-22 11:52:59.389198 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 11:52:59.389769 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.147) 0:00:14.168 *********** 2025-06-22 11:52:59.526115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:52:59.526326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:52:59.527525 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:52:59.528690 | orchestrator | 2025-06-22 11:52:59.530167 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 11:52:59.531385 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.144) 0:00:14.313 *********** 2025-06-22 11:52:59.658806 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:52:59.660072 | orchestrator | 2025-06-22 11:52:59.660183 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 11:52:59.661186 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.133) 0:00:14.446 *********** 2025-06-22 11:52:59.780500 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:52:59.781324 | orchestrator | 2025-06-22 11:52:59.782117 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 11:52:59.783064 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.121) 
0:00:14.568 *********** 2025-06-22 11:52:59.899768 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:52:59.900977 | orchestrator | 2025-06-22 11:52:59.902502 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 11:52:59.903352 | orchestrator | Sunday 22 June 2025 11:52:59 +0000 (0:00:00.118) 0:00:14.686 *********** 2025-06-22 11:53:00.225414 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 11:53:00.226293 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 11:53:00.228145 | orchestrator | } 2025-06-22 11:53:00.229862 | orchestrator | 2025-06-22 11:53:00.230657 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 11:53:00.231454 | orchestrator | Sunday 22 June 2025 11:53:00 +0000 (0:00:00.325) 0:00:15.012 *********** 2025-06-22 11:53:00.376119 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 11:53:00.376225 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 11:53:00.376879 | orchestrator | } 2025-06-22 11:53:00.377932 | orchestrator | 2025-06-22 11:53:00.378938 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 11:53:00.379711 | orchestrator | Sunday 22 June 2025 11:53:00 +0000 (0:00:00.148) 0:00:15.161 *********** 2025-06-22 11:53:00.514226 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 11:53:00.515138 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 11:53:00.516701 | orchestrator | } 2025-06-22 11:53:00.517635 | orchestrator | 2025-06-22 11:53:00.518439 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 11:53:00.519086 | orchestrator | Sunday 22 June 2025 11:53:00 +0000 (0:00:00.140) 0:00:15.301 *********** 2025-06-22 11:53:01.164203 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:01.164686 | orchestrator | 2025-06-22 11:53:01.166212 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-22 11:53:01.167928 | orchestrator | Sunday 22 June 2025 11:53:01 +0000 (0:00:00.650) 0:00:15.951 *********** 2025-06-22 11:53:01.679993 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:01.680481 | orchestrator | 2025-06-22 11:53:01.681277 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 11:53:01.682556 | orchestrator | Sunday 22 June 2025 11:53:01 +0000 (0:00:00.514) 0:00:16.466 *********** 2025-06-22 11:53:02.164835 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:02.165425 | orchestrator | 2025-06-22 11:53:02.166725 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 11:53:02.168146 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.485) 0:00:16.952 *********** 2025-06-22 11:53:02.309831 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:02.310391 | orchestrator | 2025-06-22 11:53:02.311073 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 11:53:02.312358 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.144) 0:00:17.096 *********** 2025-06-22 11:53:02.404550 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:02.405632 | orchestrator | 2025-06-22 11:53:02.406871 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 11:53:02.407655 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.093) 0:00:17.190 *********** 2025-06-22 11:53:02.504147 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:02.504330 | orchestrator | 2025-06-22 11:53:02.505411 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 11:53:02.506488 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.099) 0:00:17.290 *********** 2025-06-22 11:53:02.647771 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-22 11:53:02.647875 | orchestrator |  "vgs_report": { 2025-06-22 11:53:02.647892 | orchestrator |  "vg": [] 2025-06-22 11:53:02.648090 | orchestrator |  } 2025-06-22 11:53:02.648293 | orchestrator | } 2025-06-22 11:53:02.649069 | orchestrator | 2025-06-22 11:53:02.649621 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 11:53:02.650117 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.142) 0:00:17.433 *********** 2025-06-22 11:53:02.795519 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:02.796233 | orchestrator | 2025-06-22 11:53:02.798148 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 11:53:02.799048 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.149) 0:00:17.583 *********** 2025-06-22 11:53:02.926187 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:02.928246 | orchestrator | 2025-06-22 11:53:02.928301 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 11:53:02.929399 | orchestrator | Sunday 22 June 2025 11:53:02 +0000 (0:00:00.130) 0:00:17.713 *********** 2025-06-22 11:53:03.230826 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:03.231865 | orchestrator | 2025-06-22 11:53:03.232511 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 11:53:03.233095 | orchestrator | Sunday 22 June 2025 11:53:03 +0000 (0:00:00.303) 0:00:18.017 *********** 2025-06-22 11:53:03.364122 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:03.365096 | orchestrator | 2025-06-22 11:53:03.366990 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 11:53:03.367720 | orchestrator | Sunday 22 June 2025 11:53:03 +0000 (0:00:00.133) 0:00:18.151 *********** 2025-06-22 11:53:03.507457 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 11:53:03.507882 | orchestrator | 2025-06-22 11:53:03.510081 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 11:53:03.511084 | orchestrator | Sunday 22 June 2025 11:53:03 +0000 (0:00:00.142) 0:00:18.294 *********** 2025-06-22 11:53:03.640076 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:03.641526 | orchestrator | 2025-06-22 11:53:03.641913 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 11:53:03.642889 | orchestrator | Sunday 22 June 2025 11:53:03 +0000 (0:00:00.133) 0:00:18.428 *********** 2025-06-22 11:53:03.774676 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:03.774923 | orchestrator | 2025-06-22 11:53:03.776635 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 11:53:03.777533 | orchestrator | Sunday 22 June 2025 11:53:03 +0000 (0:00:00.132) 0:00:18.560 *********** 2025-06-22 11:53:03.904982 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:03.905507 | orchestrator | 2025-06-22 11:53:03.906833 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 11:53:03.908216 | orchestrator | Sunday 22 June 2025 11:53:03 +0000 (0:00:00.131) 0:00:18.692 *********** 2025-06-22 11:53:04.025896 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:04.026979 | orchestrator | 2025-06-22 11:53:04.027944 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 11:53:04.029032 | orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.120) 0:00:18.813 *********** 2025-06-22 11:53:04.164180 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:04.165413 | orchestrator | 2025-06-22 11:53:04.166346 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 11:53:04.167174 | 
orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.137) 0:00:18.951 *********** 2025-06-22 11:53:04.305179 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:04.306951 | orchestrator | 2025-06-22 11:53:04.308399 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 11:53:04.309281 | orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.140) 0:00:19.091 *********** 2025-06-22 11:53:04.441554 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:04.442775 | orchestrator | 2025-06-22 11:53:04.443627 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 11:53:04.444291 | orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.135) 0:00:19.227 *********** 2025-06-22 11:53:04.589488 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:04.589806 | orchestrator | 2025-06-22 11:53:04.591329 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 11:53:04.591403 | orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.146) 0:00:19.373 *********** 2025-06-22 11:53:04.719296 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:04.719822 | orchestrator | 2025-06-22 11:53:04.720849 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 11:53:04.721311 | orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.133) 0:00:19.506 *********** 2025-06-22 11:53:04.868410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:04.869172 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:04.869942 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
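The size-check tasks logged above ("Fail if size of DB+WAL LVs ... > available", "Fail if DB LV size < 30 GiB ...") can be sketched as follows. This is a reconstruction of the checks' intent, not the playbook's actual code; the function and variable names are assumptions.

```python
# Hedged sketch of the DB LV size validation logged above. The 30 GiB floor
# comes from the task names; everything else here is an assumed name.
GIB = 1024 ** 3
MIN_DB_LV_SIZE = 30 * GIB  # minimum enforced by the "Fail if DB LV size < 30 GiB" tasks

def check_db_lvs(db_lv_sizes, vg_free_bytes):
    """Validate the per-LV minimum and that the total fits the device's free space."""
    for size in db_lv_sizes:
        if size < MIN_DB_LV_SIZE:
            raise ValueError(f"DB LV size {size / GIB:.1f} GiB < 30 GiB minimum")
    needed = sum(db_lv_sizes)
    if needed > vg_free_bytes:
        raise ValueError(f"need {needed} bytes, only {vg_free_bytes} available")
    return needed

# Two 32 GiB DB LVs on a device with 100 GiB free space pass both checks.
print(check_db_lvs([32 * GIB, 32 * GIB], 100 * GIB))
```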
11:53:04.871997 | orchestrator | 2025-06-22 11:53:04.872495 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 11:53:04.872649 | orchestrator | Sunday 22 June 2025 11:53:04 +0000 (0:00:00.148) 0:00:19.655 *********** 2025-06-22 11:53:05.215200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:05.215765 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:05.216058 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:05.217311 | orchestrator | 2025-06-22 11:53:05.217950 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 11:53:05.218770 | orchestrator | Sunday 22 June 2025 11:53:05 +0000 (0:00:00.346) 0:00:20.002 *********** 2025-06-22 11:53:05.372087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:05.372767 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:05.372837 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:05.373078 | orchestrator | 2025-06-22 11:53:05.375890 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 11:53:05.376992 | orchestrator | Sunday 22 June 2025 11:53:05 +0000 (0:00:00.157) 0:00:20.159 *********** 2025-06-22 11:53:05.529047 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 
11:53:05.530117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:05.530923 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:05.531826 | orchestrator | 2025-06-22 11:53:05.532766 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 11:53:05.533698 | orchestrator | Sunday 22 June 2025 11:53:05 +0000 (0:00:00.156) 0:00:20.316 *********** 2025-06-22 11:53:05.678940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:05.679676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:05.680487 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:05.681403 | orchestrator | 2025-06-22 11:53:05.682116 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 11:53:05.682878 | orchestrator | Sunday 22 June 2025 11:53:05 +0000 (0:00:00.149) 0:00:20.466 *********** 2025-06-22 11:53:05.830960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:05.831612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:05.832825 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:05.833723 | orchestrator | 2025-06-22 11:53:05.834638 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 11:53:05.835816 | orchestrator | Sunday 22 June 2025 
11:53:05 +0000 (0:00:00.151) 0:00:20.618 *********** 2025-06-22 11:53:05.985463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:05.985865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:05.986952 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:05.987957 | orchestrator | 2025-06-22 11:53:05.989040 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 11:53:05.989705 | orchestrator | Sunday 22 June 2025 11:53:05 +0000 (0:00:00.154) 0:00:20.772 *********** 2025-06-22 11:53:06.129551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:06.130627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:06.131468 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:06.132499 | orchestrator | 2025-06-22 11:53:06.133468 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 11:53:06.134111 | orchestrator | Sunday 22 June 2025 11:53:06 +0000 (0:00:00.143) 0:00:20.916 *********** 2025-06-22 11:53:06.628394 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:06.628526 | orchestrator | 2025-06-22 11:53:06.629025 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 11:53:06.629592 | orchestrator | Sunday 22 June 2025 11:53:06 +0000 (0:00:00.496) 0:00:21.413 *********** 2025-06-22 11:53:07.123619 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:07.124053 | 
orchestrator | 2025-06-22 11:53:07.124968 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 11:53:07.125326 | orchestrator | Sunday 22 June 2025 11:53:07 +0000 (0:00:00.497) 0:00:21.911 *********** 2025-06-22 11:53:07.268814 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:53:07.269064 | orchestrator | 2025-06-22 11:53:07.269739 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 11:53:07.270323 | orchestrator | Sunday 22 June 2025 11:53:07 +0000 (0:00:00.145) 0:00:22.056 *********** 2025-06-22 11:53:07.431869 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'vg_name': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'}) 2025-06-22 11:53:07.431973 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'vg_name': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}) 2025-06-22 11:53:07.433018 | orchestrator | 2025-06-22 11:53:07.433964 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 11:53:07.436332 | orchestrator | Sunday 22 June 2025 11:53:07 +0000 (0:00:00.162) 0:00:22.219 *********** 2025-06-22 11:53:07.579732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})  2025-06-22 11:53:07.580136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})  2025-06-22 11:53:07.581009 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:53:07.582110 | orchestrator | 2025-06-22 11:53:07.583091 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 11:53:07.583467 | orchestrator | Sunday 22 June 2025 11:53:07 +0000 
(0:00:00.148) 0:00:22.367 ***********
2025-06-22 11:53:07.904328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:53:07.905198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:53:07.906368 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:53:07.907311 | orchestrator |
2025-06-22 11:53:07.908101 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-22 11:53:07.909801 | orchestrator | Sunday 22 June 2025 11:53:07 +0000 (0:00:00.325) 0:00:22.692 ***********
2025-06-22 11:53:08.064477 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'})
2025-06-22 11:53:08.064976 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'})
2025-06-22 11:53:08.065431 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:53:08.066111 | orchestrator |
2025-06-22 11:53:08.066616 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-22 11:53:08.067368 | orchestrator | Sunday 22 June 2025 11:53:08 +0000 (0:00:00.159) 0:00:22.852 ***********
2025-06-22 11:53:08.347627 | orchestrator | ok: [testbed-node-3] => {
2025-06-22 11:53:08.348908 | orchestrator |     "lvm_report": {
2025-06-22 11:53:08.350268 | orchestrator |         "lv": [
2025-06-22 11:53:08.352172 | orchestrator |             {
2025-06-22 11:53:08.353893 | orchestrator |                 "lv_name": "osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6",
2025-06-22 11:53:08.355527 | orchestrator |                 "vg_name": "ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6"
2025-06-22 11:53:08.358237 | orchestrator |             },
2025-06-22 11:53:08.358702 | orchestrator |             {
2025-06-22 11:53:08.359477 | orchestrator |                 "lv_name": "osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f",
2025-06-22 11:53:08.361126 | orchestrator |                 "vg_name": "ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f"
2025-06-22 11:53:08.362082 | orchestrator |             }
2025-06-22 11:53:08.363198 | orchestrator |         ],
2025-06-22 11:53:08.363840 | orchestrator |         "pv": [
2025-06-22 11:53:08.364967 | orchestrator |             {
2025-06-22 11:53:08.365405 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-22 11:53:08.366107 | orchestrator |                 "vg_name": "ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f"
2025-06-22 11:53:08.367021 | orchestrator |             },
2025-06-22 11:53:08.368194 | orchestrator |             {
2025-06-22 11:53:08.368506 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-22 11:53:08.369093 | orchestrator |                 "vg_name": "ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6"
2025-06-22 11:53:08.369849 | orchestrator |             }
2025-06-22 11:53:08.370421 | orchestrator |         ]
2025-06-22 11:53:08.371019 | orchestrator |     }
2025-06-22 11:53:08.371768 | orchestrator | }
2025-06-22 11:53:08.372405 | orchestrator |
2025-06-22 11:53:08.372956 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-22 11:53:08.373361 | orchestrator |
2025-06-22 11:53:08.373995 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-22 11:53:08.374205 | orchestrator | Sunday 22 June 2025 11:53:08 +0000 (0:00:00.282) 0:00:23.134 ***********
2025-06-22 11:53:08.621915 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-22 11:53:08.622148 | orchestrator |
2025-06-22 11:53:08.623349 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-22 11:53:08.624547 | orchestrator | Sunday 22 June 2025 11:53:08 +0000 (0:00:00.272) 0:00:23.407 ***********
2025-06-22 11:53:08.845114 | orchestrator | ok:
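The "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks above produce the lvm_report shown in the log. A minimal sketch of that combine step, assuming the gathering tasks call `lvs`/`pvs` with `--reportformat json` (the helper name and sample data here are illustrative, not taken from the playbook):

```python
import json

# Sample strings shaped like `lvs --reportformat json -o lv_name,vg_name`
# and `pvs --reportformat json -o pv_name,vg_name` output (shortened UUIDs).
lvs_json = '{"report": [{"lv": [{"lv_name": "osd-block-0b51", "vg_name": "ceph-0b51"}]}]}'
pvs_json = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-0b51"}]}]}'

def combine_reports(lvs_out: str, pvs_out: str) -> dict:
    """Merge the lv and pv report sections into one dict like the logged lvm_report."""
    lv = json.loads(lvs_out)["report"][0]["lv"]
    pv = json.loads(pvs_out)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(lvs_json, pvs_json)
print(lvm_report["pv"][0]["pv_name"])  # -> /dev/sdb
```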
[testbed-node-4] 2025-06-22 11:53:08.845830 | orchestrator | 2025-06-22 11:53:08.847753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:08.849176 | orchestrator | Sunday 22 June 2025 11:53:08 +0000 (0:00:00.224) 0:00:23.632 *********** 2025-06-22 11:53:09.235032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 11:53:09.237077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 11:53:09.237150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 11:53:09.238078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 11:53:09.240373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 11:53:09.240403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 11:53:09.241543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 11:53:09.242678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 11:53:09.244413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 11:53:09.245340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 11:53:09.246254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 11:53:09.247175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-22 11:53:09.247839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 11:53:09.248598 | orchestrator | 2025-06-22 
11:53:09.249510 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:09.250148 | orchestrator | Sunday 22 June 2025 11:53:09 +0000 (0:00:00.390) 0:00:24.022 *********** 2025-06-22 11:53:09.433676 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:09.433792 | orchestrator | 2025-06-22 11:53:09.433858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:09.433892 | orchestrator | Sunday 22 June 2025 11:53:09 +0000 (0:00:00.196) 0:00:24.219 *********** 2025-06-22 11:53:09.623012 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:09.623111 | orchestrator | 2025-06-22 11:53:09.623763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:09.624635 | orchestrator | Sunday 22 June 2025 11:53:09 +0000 (0:00:00.191) 0:00:24.410 *********** 2025-06-22 11:53:09.818450 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:09.819555 | orchestrator | 2025-06-22 11:53:09.820541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:09.821242 | orchestrator | Sunday 22 June 2025 11:53:09 +0000 (0:00:00.196) 0:00:24.606 *********** 2025-06-22 11:53:10.439499 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:10.441025 | orchestrator | 2025-06-22 11:53:10.442246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:10.443000 | orchestrator | Sunday 22 June 2025 11:53:10 +0000 (0:00:00.620) 0:00:25.226 *********** 2025-06-22 11:53:10.641502 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:10.641680 | orchestrator | 2025-06-22 11:53:10.642495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:10.644690 | orchestrator | Sunday 22 June 2025 11:53:10 +0000 (0:00:00.200) 
0:00:25.427 *********** 2025-06-22 11:53:10.830866 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:10.831342 | orchestrator | 2025-06-22 11:53:10.832295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:10.833254 | orchestrator | Sunday 22 June 2025 11:53:10 +0000 (0:00:00.192) 0:00:25.619 *********** 2025-06-22 11:53:11.016881 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:11.017241 | orchestrator | 2025-06-22 11:53:11.018154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:11.019350 | orchestrator | Sunday 22 June 2025 11:53:11 +0000 (0:00:00.185) 0:00:25.804 *********** 2025-06-22 11:53:11.199351 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:11.199806 | orchestrator | 2025-06-22 11:53:11.200804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:11.201990 | orchestrator | Sunday 22 June 2025 11:53:11 +0000 (0:00:00.182) 0:00:25.986 *********** 2025-06-22 11:53:11.623526 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48) 2025-06-22 11:53:11.623753 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48) 2025-06-22 11:53:11.624864 | orchestrator | 2025-06-22 11:53:11.626139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:11.626758 | orchestrator | Sunday 22 June 2025 11:53:11 +0000 (0:00:00.421) 0:00:26.407 *********** 2025-06-22 11:53:12.008860 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273) 2025-06-22 11:53:12.009918 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273) 2025-06-22 11:53:12.010315 | orchestrator | 2025-06-22 11:53:12.011416 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:12.012066 | orchestrator | Sunday 22 June 2025 11:53:12 +0000 (0:00:00.388) 0:00:26.796 *********** 2025-06-22 11:53:12.399425 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125) 2025-06-22 11:53:12.399731 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125) 2025-06-22 11:53:12.400467 | orchestrator | 2025-06-22 11:53:12.400986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:12.403210 | orchestrator | Sunday 22 June 2025 11:53:12 +0000 (0:00:00.390) 0:00:27.186 *********** 2025-06-22 11:53:12.802537 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf) 2025-06-22 11:53:12.802717 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf) 2025-06-22 11:53:12.803119 | orchestrator | 2025-06-22 11:53:12.803597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 11:53:12.804390 | orchestrator | Sunday 22 June 2025 11:53:12 +0000 (0:00:00.402) 0:00:27.589 *********** 2025-06-22 11:53:13.129286 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 11:53:13.129383 | orchestrator | 2025-06-22 11:53:13.129770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:13.130204 | orchestrator | Sunday 22 June 2025 11:53:13 +0000 (0:00:00.327) 0:00:27.917 *********** 2025-06-22 11:53:13.712551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 11:53:13.712795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 
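The repeated "Add known links ..." tasks above attach `/dev/disk/by-id` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_...`) to the device list. A hypothetical sketch of resolving such links back to kernel device names, using a sample link table instead of a live `/dev` tree; the function name and data are assumptions, not the playbook's code:

```python
from pathlib import PurePosixPath

# Example by-id symlink targets as they would appear under /dev/disk/by-id
# (names taken from the log; the relative targets are illustrative).
links = {
    "scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48": "../../sdb",
    "ata-QEMU_DVD-ROM_QM00001": "../../sr0",
}

def device_aliases(link_table: dict) -> dict:
    """Map each by-id link name to the kernel device it points at."""
    return {name: PurePosixPath(target).name for name, target in link_table.items()}

print(device_aliases(links))
```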
11:53:13.712816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 11:53:13.713524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 11:53:13.714093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 11:53:13.716076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 11:53:13.716553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 11:53:13.717110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 11:53:13.717632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 11:53:13.718473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 11:53:13.718735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 11:53:13.719170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 11:53:13.719749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 11:53:13.719905 | orchestrator | 2025-06-22 11:53:13.720308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:13.721029 | orchestrator | Sunday 22 June 2025 11:53:13 +0000 (0:00:00.581) 0:00:28.498 *********** 2025-06-22 11:53:13.909525 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:13.909702 | orchestrator | 2025-06-22 11:53:13.909878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:13.910136 | orchestrator | Sunday 22 
June 2025 11:53:13 +0000 (0:00:00.198) 0:00:28.697 *********** 2025-06-22 11:53:14.107262 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:14.108017 | orchestrator | 2025-06-22 11:53:14.108694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:14.109285 | orchestrator | Sunday 22 June 2025 11:53:14 +0000 (0:00:00.197) 0:00:28.894 *********** 2025-06-22 11:53:14.307382 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:14.307710 | orchestrator | 2025-06-22 11:53:14.308610 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:14.309335 | orchestrator | Sunday 22 June 2025 11:53:14 +0000 (0:00:00.199) 0:00:29.094 *********** 2025-06-22 11:53:14.510293 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:14.510470 | orchestrator | 2025-06-22 11:53:14.510619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:14.511381 | orchestrator | Sunday 22 June 2025 11:53:14 +0000 (0:00:00.202) 0:00:29.297 *********** 2025-06-22 11:53:14.708192 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:14.709559 | orchestrator | 2025-06-22 11:53:14.709757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:14.710804 | orchestrator | Sunday 22 June 2025 11:53:14 +0000 (0:00:00.196) 0:00:29.493 *********** 2025-06-22 11:53:14.904759 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:14.905424 | orchestrator | 2025-06-22 11:53:14.906106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:14.906760 | orchestrator | Sunday 22 June 2025 11:53:14 +0000 (0:00:00.198) 0:00:29.692 *********** 2025-06-22 11:53:15.090827 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:15.091140 | orchestrator | 2025-06-22 11:53:15.091892 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:15.092523 | orchestrator | Sunday 22 June 2025 11:53:15 +0000 (0:00:00.185) 0:00:29.878 *********** 2025-06-22 11:53:15.293694 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:15.293880 | orchestrator | 2025-06-22 11:53:15.293996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:15.294937 | orchestrator | Sunday 22 June 2025 11:53:15 +0000 (0:00:00.201) 0:00:30.080 *********** 2025-06-22 11:53:16.135375 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 11:53:16.135479 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 11:53:16.136261 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 11:53:16.136705 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 11:53:16.137458 | orchestrator | 2025-06-22 11:53:16.138314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:16.138909 | orchestrator | Sunday 22 June 2025 11:53:16 +0000 (0:00:00.840) 0:00:30.921 *********** 2025-06-22 11:53:16.326268 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:16.326879 | orchestrator | 2025-06-22 11:53:16.327719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:16.328403 | orchestrator | Sunday 22 June 2025 11:53:16 +0000 (0:00:00.192) 0:00:31.113 *********** 2025-06-22 11:53:16.517769 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:16.518458 | orchestrator | 2025-06-22 11:53:16.519897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:16.519926 | orchestrator | Sunday 22 June 2025 11:53:16 +0000 (0:00:00.190) 0:00:31.303 *********** 2025-06-22 11:53:17.144365 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:17.145265 | 
orchestrator | 2025-06-22 11:53:17.146953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:17.147025 | orchestrator | Sunday 22 June 2025 11:53:17 +0000 (0:00:00.626) 0:00:31.930 *********** 2025-06-22 11:53:17.341702 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:17.341912 | orchestrator | 2025-06-22 11:53:17.343078 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 11:53:17.343933 | orchestrator | Sunday 22 June 2025 11:53:17 +0000 (0:00:00.198) 0:00:32.129 *********** 2025-06-22 11:53:17.478497 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:53:17.478972 | orchestrator | 2025-06-22 11:53:17.479724 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 11:53:17.480716 | orchestrator | Sunday 22 June 2025 11:53:17 +0000 (0:00:00.137) 0:00:32.266 *********** 2025-06-22 11:53:17.656769 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd90edff2-979c-5e5e-98e2-f02394d35fb4'}}) 2025-06-22 11:53:17.657504 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9de1692c-afc0-5cdb-8a59-e564d6a096fc'}}) 2025-06-22 11:53:17.658208 | orchestrator | 2025-06-22 11:53:17.658967 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 11:53:17.659794 | orchestrator | Sunday 22 June 2025 11:53:17 +0000 (0:00:00.175) 0:00:32.442 *********** 2025-06-22 11:53:19.401975 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'}) 2025-06-22 11:53:19.402518 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'}) 2025-06-22 11:53:19.403623 | 
orchestrator |
2025-06-22 11:53:19.404683 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-22 11:53:19.405838 | orchestrator | Sunday 22 June 2025  11:53:19 +0000 (0:00:01.744)       0:00:34.187 ***********
2025-06-22 11:53:19.563873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:19.564390 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:19.564801 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:19.565830 | orchestrator |
2025-06-22 11:53:19.566654 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-22 11:53:19.567446 | orchestrator | Sunday 22 June 2025  11:53:19 +0000 (0:00:00.164)       0:00:34.351 ***********
2025-06-22 11:53:20.849620 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:20.850932 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:20.853398 | orchestrator |
2025-06-22 11:53:20.853908 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-22 11:53:20.854819 | orchestrator | Sunday 22 June 2025  11:53:20 +0000 (0:00:01.284)       0:00:35.635 ***********
2025-06-22 11:53:20.992950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:20.994104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:20.995228 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:20.996348 | orchestrator |
2025-06-22 11:53:20.998664 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-22 11:53:20.999276 | orchestrator | Sunday 22 June 2025  11:53:20 +0000 (0:00:00.144)       0:00:35.780 ***********
2025-06-22 11:53:21.122980 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:21.123089 | orchestrator |
2025-06-22 11:53:21.126416 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-22 11:53:21.126517 | orchestrator | Sunday 22 June 2025  11:53:21 +0000 (0:00:00.128)       0:00:35.909 ***********
2025-06-22 11:53:21.265002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:21.265329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:21.265838 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:21.266962 | orchestrator |
2025-06-22 11:53:21.267703 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-22 11:53:21.269712 | orchestrator | Sunday 22 June 2025  11:53:21 +0000 (0:00:00.143)       0:00:36.052 ***********
2025-06-22 11:53:21.387464 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:21.388664 | orchestrator |
2025-06-22 11:53:21.389172 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-22 11:53:21.391865 | orchestrator | Sunday 22 June 2025  11:53:21 +0000 (0:00:00.121)       0:00:36.174 ***********
2025-06-22 11:53:21.558226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:21.558326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:21.558341 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:21.558354 | orchestrator |
2025-06-22 11:53:21.558398 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-22 11:53:21.558411 | orchestrator | Sunday 22 June 2025  11:53:21 +0000 (0:00:00.165)       0:00:36.339 ***********
2025-06-22 11:53:21.873510 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:21.873953 | orchestrator |
2025-06-22 11:53:21.875260 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-22 11:53:21.877194 | orchestrator | Sunday 22 June 2025  11:53:21 +0000 (0:00:00.320)       0:00:36.660 ***********
2025-06-22 11:53:22.023134 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:22.023954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:22.024999 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:22.025678 | orchestrator |
2025-06-22 11:53:22.026366 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-22 11:53:22.026874 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.149)       0:00:36.810 ***********
2025-06-22 11:53:22.175735 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:22.177495 | orchestrator |
2025-06-22 11:53:22.177523 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-22 11:53:22.177535 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.152)       0:00:36.962 ***********
2025-06-22 11:53:22.328110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:22.328915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:22.330061 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:22.331940 | orchestrator |
2025-06-22 11:53:22.332970 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-22 11:53:22.333719 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.152)       0:00:37.114 ***********
2025-06-22 11:53:22.487052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:22.487812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:22.489029 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:22.490085 | orchestrator |
2025-06-22 11:53:22.491433 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-22 11:53:22.492246 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.159)       0:00:37.274 ***********
2025-06-22 11:53:22.638715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:22.639159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:22.642530 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:22.643760 | orchestrator |
2025-06-22 11:53:22.645047 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-22 11:53:22.649941 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.151)       0:00:37.425 ***********
2025-06-22 11:53:22.791476 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:22.791753 | orchestrator |
2025-06-22 11:53:22.791902 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-22 11:53:22.792725 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.153)       0:00:37.579 ***********
2025-06-22 11:53:22.920922 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:22.922056 | orchestrator |
2025-06-22 11:53:22.922584 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-22 11:53:22.925201 | orchestrator | Sunday 22 June 2025  11:53:22 +0000 (0:00:00.129)       0:00:37.708 ***********
2025-06-22 11:53:23.069246 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:23.069513 | orchestrator |
2025-06-22 11:53:23.070357 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-22 11:53:23.071358 | orchestrator | Sunday 22 June 2025  11:53:23 +0000 (0:00:00.147)       0:00:37.855 ***********
2025-06-22 11:53:23.199478 | orchestrator | ok: [testbed-node-4] => {
2025-06-22 11:53:23.199783 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-22 11:53:23.200719 | orchestrator | }
2025-06-22 11:53:23.201615 | orchestrator |
2025-06-22 11:53:23.203489 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-22 11:53:23.203514 | orchestrator | Sunday 22 June 2025  11:53:23 +0000 (0:00:00.131)       0:00:37.987 ***********
2025-06-22 11:53:23.344021 | orchestrator | ok: [testbed-node-4] => {
2025-06-22 11:53:23.344546 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-22 11:53:23.345373 | orchestrator | }
2025-06-22 11:53:23.346216 | orchestrator |
2025-06-22 11:53:23.347265 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-22 11:53:23.347973 | orchestrator | Sunday 22 June 2025  11:53:23 +0000 (0:00:00.142)       0:00:38.130 ***********
2025-06-22 11:53:23.505109 | orchestrator | ok: [testbed-node-4] => {
2025-06-22 11:53:23.508241 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-22 11:53:23.509884 | orchestrator | }
2025-06-22 11:53:23.510499 | orchestrator |
2025-06-22 11:53:23.511234 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-22 11:53:23.511783 | orchestrator | Sunday 22 June 2025  11:53:23 +0000 (0:00:00.160)       0:00:38.290 ***********
2025-06-22 11:53:24.227335 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:24.227546 | orchestrator |
2025-06-22 11:53:24.227630 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-22 11:53:24.227900 | orchestrator | Sunday 22 June 2025  11:53:24 +0000 (0:00:00.723)       0:00:39.014 ***********
2025-06-22 11:53:24.744399 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:24.745106 | orchestrator |
2025-06-22 11:53:24.746134 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-22 11:53:24.746934 | orchestrator | Sunday 22 June 2025  11:53:24 +0000 (0:00:00.514)       0:00:39.529 ***********
2025-06-22 11:53:25.262743 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:25.263431 | orchestrator |
2025-06-22 11:53:25.263986 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-22 11:53:25.265387 | orchestrator | Sunday 22 June 2025  11:53:25 +0000 (0:00:00.519)       0:00:40.049 ***********
2025-06-22 11:53:25.406693 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:25.407434 | orchestrator |
2025-06-22 11:53:25.408153 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-22 11:53:25.410154 | orchestrator | Sunday 22 June 2025  11:53:25 +0000 (0:00:00.145)       0:00:40.194 ***********
2025-06-22 11:53:25.526322 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:25.526480 | orchestrator |
2025-06-22 11:53:25.526745 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-22 11:53:25.527059 | orchestrator | Sunday 22 June 2025  11:53:25 +0000 (0:00:00.119)       0:00:40.314 ***********
2025-06-22 11:53:25.631485 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:25.631735 | orchestrator |
2025-06-22 11:53:25.632299 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-22 11:53:25.633069 | orchestrator | Sunday 22 June 2025  11:53:25 +0000 (0:00:00.105)       0:00:40.419 ***********
2025-06-22 11:53:25.762952 | orchestrator | ok: [testbed-node-4] => {
2025-06-22 11:53:25.763240 | orchestrator |     "vgs_report": {
2025-06-22 11:53:25.763339 | orchestrator |         "vg": []
2025-06-22 11:53:25.763671 | orchestrator |     }
2025-06-22 11:53:25.765269 | orchestrator | }
2025-06-22 11:53:25.766815 | orchestrator |
2025-06-22 11:53:25.767114 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-22 11:53:25.767615 | orchestrator | Sunday 22 June 2025  11:53:25 +0000 (0:00:00.131)       0:00:40.551 ***********
2025-06-22 11:53:25.890512 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:25.891352 | orchestrator |
2025-06-22 11:53:25.892278 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-22 11:53:25.894170 | orchestrator | Sunday 22 June 2025  11:53:25 +0000 (0:00:00.123)       0:00:40.674 ***********
2025-06-22 11:53:26.013192 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:26.013294 | orchestrator |
2025-06-22 11:53:26.013481 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-22 11:53:26.014068 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.121)       0:00:40.796 ***********
2025-06-22 11:53:26.138968 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:26.139146 | orchestrator |
2025-06-22 11:53:26.139686 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-22 11:53:26.140555 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.130)       0:00:40.926 ***********
2025-06-22 11:53:26.276662 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:26.277330 | orchestrator |
2025-06-22 11:53:26.279252 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-22 11:53:26.280489 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.136)       0:00:41.063 ***********
2025-06-22 11:53:26.412639 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:26.412838 | orchestrator |
2025-06-22 11:53:26.413231 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-22 11:53:26.414139 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.137)       0:00:41.200 ***********
2025-06-22 11:53:26.730967 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:26.731178 | orchestrator |
2025-06-22 11:53:26.732928 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-22 11:53:26.733309 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.316)       0:00:41.517 ***********
2025-06-22 11:53:26.873232 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:26.873380 | orchestrator |
2025-06-22 11:53:26.874210 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-22 11:53:26.875080 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.142)       0:00:41.660 ***********
2025-06-22 11:53:27.004410 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.005059 | orchestrator |
2025-06-22 11:53:27.007822 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-22 11:53:27.009056 | orchestrator | Sunday 22 June 2025  11:53:26 +0000 (0:00:00.131)       0:00:41.791 ***********
2025-06-22 11:53:27.131956 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.132790 | orchestrator |
2025-06-22 11:53:27.133427 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-22 11:53:27.134656 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.127)       0:00:41.919 ***********
2025-06-22 11:53:27.257844 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.258902 | orchestrator |
2025-06-22 11:53:27.259670 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-22 11:53:27.261010 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.126)       0:00:42.045 ***********
2025-06-22 11:53:27.381871 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.382919 | orchestrator |
2025-06-22 11:53:27.383835 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-22 11:53:27.384819 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.123)       0:00:42.169 ***********
2025-06-22 11:53:27.514841 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.515463 | orchestrator |
2025-06-22 11:53:27.517104 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-22 11:53:27.517998 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.133)       0:00:42.302 ***********
2025-06-22 11:53:27.643755 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.644696 | orchestrator |
2025-06-22 11:53:27.645458 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-22 11:53:27.646835 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.128)       0:00:42.431 ***********
2025-06-22 11:53:27.782819 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.783983 | orchestrator |
2025-06-22 11:53:27.784949 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-22 11:53:27.786078 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.139)       0:00:42.570 ***********
2025-06-22 11:53:27.928917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:27.930555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:27.932199 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:27.933386 | orchestrator |
2025-06-22 11:53:27.934072 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-22 11:53:27.935013 | orchestrator | Sunday 22 June 2025  11:53:27 +0000 (0:00:00.145)       0:00:42.716 ***********
2025-06-22 11:53:28.078144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:28.078707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:28.080908 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:28.080952 | orchestrator |
2025-06-22 11:53:28.081031 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-22 11:53:28.081643 | orchestrator | Sunday 22 June 2025  11:53:28 +0000 (0:00:00.148)       0:00:42.864 ***********
2025-06-22 11:53:28.229704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:28.232094 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:28.232848 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:28.233705 | orchestrator |
2025-06-22 11:53:28.234488 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-22 11:53:28.235628 | orchestrator | Sunday 22 June 2025  11:53:28 +0000 (0:00:00.153)       0:00:43.017 ***********
2025-06-22 11:53:28.594270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:28.595878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:28.597834 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:28.597862 | orchestrator |
2025-06-22 11:53:28.598524 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-22 11:53:28.599384 | orchestrator | Sunday 22 June 2025  11:53:28 +0000 (0:00:00.364)       0:00:43.381 ***********
2025-06-22 11:53:28.754013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:28.755466 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:28.756102 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:28.759372 | orchestrator |
2025-06-22 11:53:28.759445 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-22 11:53:28.759460 | orchestrator | Sunday 22 June 2025  11:53:28 +0000 (0:00:00.157)       0:00:43.539 ***********
2025-06-22 11:53:28.895760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:28.896811 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:28.897387 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:28.899133 | orchestrator |
2025-06-22 11:53:28.899602 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-22 11:53:28.900297 | orchestrator | Sunday 22 June 2025  11:53:28 +0000 (0:00:00.144)       0:00:43.683 ***********
2025-06-22 11:53:29.049850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:29.050360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:29.051193 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:29.052800 | orchestrator |
2025-06-22 11:53:29.054135 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-22 11:53:29.054667 | orchestrator | Sunday 22 June 2025  11:53:29 +0000 (0:00:00.152)       0:00:43.835 ***********
2025-06-22 11:53:29.187102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:29.187826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:29.188267 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:29.189170 | orchestrator |
2025-06-22 11:53:29.190088 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-22 11:53:29.190646 | orchestrator | Sunday 22 June 2025  11:53:29 +0000 (0:00:00.138)       0:00:43.974 ***********
2025-06-22 11:53:29.688159 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:29.688771 | orchestrator |
2025-06-22 11:53:29.689870 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-22 11:53:29.691066 | orchestrator | Sunday 22 June 2025  11:53:29 +0000 (0:00:00.501)       0:00:44.475 ***********
2025-06-22 11:53:30.199374 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:30.199645 | orchestrator |
2025-06-22 11:53:30.201096 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-22 11:53:30.202314 | orchestrator | Sunday 22 June 2025  11:53:30 +0000 (0:00:00.508)       0:00:44.983 ***********
2025-06-22 11:53:30.344214 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:53:30.344364 | orchestrator |
2025-06-22 11:53:30.345208 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-22 11:53:30.346244 | orchestrator | Sunday 22 June 2025  11:53:30 +0000 (0:00:00.148)       0:00:45.131 ***********
2025-06-22 11:53:30.502876 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'vg_name': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:30.503681 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'vg_name': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:30.504954 | orchestrator |
2025-06-22 11:53:30.505929 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-22 11:53:30.507208 | orchestrator | Sunday 22 June 2025  11:53:30 +0000 (0:00:00.158)       0:00:45.290 ***********
2025-06-22 11:53:30.650172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:30.650656 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:30.652093 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:30.653044 | orchestrator |
2025-06-22 11:53:30.654075 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-22 11:53:30.655531 | orchestrator | Sunday 22 June 2025  11:53:30 +0000 (0:00:00.147)       0:00:45.437 ***********
2025-06-22 11:53:30.790993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:30.791085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:30.792342 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:30.793613 | orchestrator |
2025-06-22 11:53:30.794083 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-22 11:53:30.794655 | orchestrator | Sunday 22 June 2025  11:53:30 +0000 (0:00:00.140)       0:00:45.577 ***********
2025-06-22 11:53:30.934864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'})
2025-06-22 11:53:30.936184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'})
2025-06-22 11:53:30.936493 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:53:30.939038 | orchestrator |
2025-06-22 11:53:30.939256 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-22 11:53:30.939777 | orchestrator | Sunday 22 June 2025  11:53:30 +0000 (0:00:00.144)       0:00:45.722 ***********
2025-06-22 11:53:31.424910 | orchestrator | ok: [testbed-node-4] => {
2025-06-22 11:53:31.425076 | orchestrator |     "lvm_report": {
2025-06-22 11:53:31.426361 | orchestrator |         "lv": [
2025-06-22 11:53:31.426861 | orchestrator |             {
2025-06-22 11:53:31.427781 | orchestrator |                 "lv_name": "osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc",
2025-06-22 11:53:31.428484 | orchestrator |                 "vg_name": "ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc"
2025-06-22 11:53:31.429507 | orchestrator |             },
2025-06-22 11:53:31.429858 | orchestrator |             {
2025-06-22 11:53:31.431111 | orchestrator |                 "lv_name": "osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4",
2025-06-22 11:53:31.431921 | orchestrator |                 "vg_name": "ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4"
2025-06-22 11:53:31.432531 | orchestrator |             }
2025-06-22 11:53:31.433494 | orchestrator |         ],
2025-06-22 11:53:31.433810 | orchestrator |         "pv": [
2025-06-22 11:53:31.434314 | orchestrator |             {
2025-06-22 11:53:31.435009 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-22 11:53:31.435633 | orchestrator |                 "vg_name": "ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4"
2025-06-22 11:53:31.436273 | orchestrator |             },
2025-06-22 11:53:31.436522 | orchestrator |             {
2025-06-22 11:53:31.436946 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-22 11:53:31.438535 | orchestrator |                 "vg_name": "ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc"
2025-06-22 11:53:31.439227 | orchestrator |             }
2025-06-22 11:53:31.439523 | orchestrator |         ]
2025-06-22 11:53:31.440136 | orchestrator |     }
2025-06-22 11:53:31.440985 | orchestrator | }
2025-06-22 11:53:31.441249 | orchestrator |
2025-06-22 11:53:31.441784 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-22 11:53:31.442322 | orchestrator |
2025-06-22 11:53:31.442856 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-22 11:53:31.443502 | orchestrator | Sunday 22 June 2025  11:53:31 +0000 (0:00:00.488)       0:00:46.210 ***********
2025-06-22 11:53:31.667225 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-22 11:53:31.667933 | orchestrator |
2025-06-22 11:53:31.668153 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-22 11:53:31.672154 | orchestrator | Sunday 22 June 2025  11:53:31 +0000 (0:00:00.243)       0:00:46.454 ***********
2025-06-22 11:53:31.884973 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:53:31.885767 | orchestrator |
2025-06-22 11:53:31.886873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:31.887418 | orchestrator | Sunday 22 June 2025  11:53:31 +0000 (0:00:00.217)       0:00:46.672 ***********
2025-06-22 11:53:32.294421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-22 11:53:32.295496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-22 11:53:32.296103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-22 11:53:32.296791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-22 11:53:32.297160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-22 11:53:32.298124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-22 11:53:32.298990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-22 11:53:32.299513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-22 11:53:32.300179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-22 11:53:32.300812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-22 11:53:32.301359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-22 11:53:32.302280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-22 11:53:32.302628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-22 11:53:32.303460 | orchestrator |
2025-06-22 11:53:32.304681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:32.305716 | orchestrator | Sunday 22 June 2025  11:53:32 +0000 (0:00:00.407)       0:00:47.080 ***********
2025-06-22 11:53:32.489353 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:32.489444 | orchestrator |
2025-06-22 11:53:32.489929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:32.490695 | orchestrator | Sunday 22 June 2025  11:53:32 +0000 (0:00:00.196)       0:00:47.276 ***********
2025-06-22 11:53:32.688453 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:32.689242 | orchestrator |
2025-06-22 11:53:32.689435 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:32.690233 | orchestrator | Sunday 22 June 2025  11:53:32 +0000 (0:00:00.198)       0:00:47.474 ***********
2025-06-22 11:53:32.895891 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:32.897184 | orchestrator |
2025-06-22 11:53:32.897745 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:32.898949 | orchestrator | Sunday 22 June 2025  11:53:32 +0000 (0:00:00.208)       0:00:47.683 ***********
2025-06-22 11:53:33.095992 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:33.096995 | orchestrator |
2025-06-22 11:53:33.097680 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:33.098951 | orchestrator | Sunday 22 June 2025  11:53:33 +0000 (0:00:00.199)       0:00:47.883 ***********
2025-06-22 11:53:33.288123 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:33.288303 | orchestrator |
2025-06-22 11:53:33.290313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:33.290750 | orchestrator | Sunday 22 June 2025  11:53:33 +0000 (0:00:00.190)       0:00:48.073 ***********
2025-06-22 11:53:33.942071 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:33.944351 | orchestrator |
2025-06-22 11:53:33.945229 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:33.946173 | orchestrator | Sunday 22 June 2025  11:53:33 +0000 (0:00:00.655)       0:00:48.729 ***********
2025-06-22 11:53:34.165632 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:34.165818 | orchestrator |
2025-06-22 11:53:34.166902 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:34.169167 | orchestrator | Sunday 22 June 2025  11:53:34 +0000 (0:00:00.222)       0:00:48.951 ***********
2025-06-22 11:53:34.388542 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:34.389749 | orchestrator |
2025-06-22 11:53:34.390164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:34.391100 | orchestrator | Sunday 22 June 2025  11:53:34 +0000 (0:00:00.224)       0:00:49.175 ***********
2025-06-22 11:53:34.809435 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033)
2025-06-22 11:53:34.810195 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033)
2025-06-22 11:53:34.811546 | orchestrator |
2025-06-22 11:53:34.812522 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:34.814658 | orchestrator | Sunday 22 June 2025  11:53:34 +0000 (0:00:00.421)       0:00:49.597 ***********
2025-06-22 11:53:35.217018 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f)
2025-06-22 11:53:35.217328 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f)
2025-06-22 11:53:35.219128 | orchestrator |
2025-06-22 11:53:35.219154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:35.219561 | orchestrator | Sunday 22 June 2025  11:53:35 +0000 (0:00:00.404)       0:00:50.002 ***********
2025-06-22 11:53:35.629221 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e)
2025-06-22 11:53:35.629836 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e)
2025-06-22 11:53:35.630794 | orchestrator |
2025-06-22 11:53:35.632105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:35.632631 | orchestrator | Sunday 22 June 2025  11:53:35 +0000 (0:00:00.413)       0:00:50.416 ***********
2025-06-22 11:53:36.044202 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52)
2025-06-22 11:53:36.045086 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52)
2025-06-22 11:53:36.045756 | orchestrator |
2025-06-22 11:53:36.046542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-22 11:53:36.047285 | orchestrator | Sunday 22 June 2025  11:53:36 +0000 (0:00:00.415)       0:00:50.831 ***********
2025-06-22 11:53:36.413446 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-22 11:53:36.414104 | orchestrator |
2025-06-22 11:53:36.414415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:53:36.415109 | orchestrator | Sunday 22 June 2025  11:53:36 +0000 (0:00:00.368)       0:00:51.199 ***********
2025-06-22 11:53:36.816114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-22 11:53:36.816193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-22 11:53:36.816207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-22 11:53:36.816277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-22 11:53:36.817279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-22 11:53:36.818605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-22 11:53:36.818631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-22 11:53:36.819588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-22 11:53:36.820803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-22 11:53:36.821713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-22 11:53:36.822673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-22 11:53:36.823426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-22 11:53:36.824079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-22 11:53:36.824869 | orchestrator |
2025-06-22 11:53:36.825474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:53:36.826132 | orchestrator | Sunday 22 June 2025  11:53:36 +0000 (0:00:00.402)       0:00:51.601 ***********
2025-06-22 11:53:36.996341 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:36.996653 | orchestrator |
2025-06-22 11:53:36.997524 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:53:36.998124 | orchestrator | Sunday 22 June 2025  11:53:36 +0000 (0:00:00.182)       0:00:51.784 ***********
2025-06-22 11:53:37.200242 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:37.200932 | orchestrator |
2025-06-22 11:53:37.202126 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:53:37.203534 | orchestrator | Sunday 22 June 2025  11:53:37 +0000 (0:00:00.203)       0:00:51.987 ***********
2025-06-22 11:53:37.808016 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:37.808399 | orchestrator |
2025-06-22 11:53:37.810011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-22 11:53:37.811717 | orchestrator | Sunday 22 June 2025  11:53:37 +0000 (0:00:00.607)       0:00:52.595 ***********
2025-06-22 11:53:38.021484 | orchestrator |
skipping: [testbed-node-5] 2025-06-22 11:53:38.021980 | orchestrator | 2025-06-22 11:53:38.023273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:38.024650 | orchestrator | Sunday 22 June 2025 11:53:38 +0000 (0:00:00.214) 0:00:52.809 *********** 2025-06-22 11:53:38.217047 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:38.217884 | orchestrator | 2025-06-22 11:53:38.219596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:38.221292 | orchestrator | Sunday 22 June 2025 11:53:38 +0000 (0:00:00.195) 0:00:53.004 *********** 2025-06-22 11:53:38.415972 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:38.416209 | orchestrator | 2025-06-22 11:53:38.417215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:38.419971 | orchestrator | Sunday 22 June 2025 11:53:38 +0000 (0:00:00.198) 0:00:53.203 *********** 2025-06-22 11:53:38.630010 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:38.630448 | orchestrator | 2025-06-22 11:53:38.631704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:38.632419 | orchestrator | Sunday 22 June 2025 11:53:38 +0000 (0:00:00.213) 0:00:53.416 *********** 2025-06-22 11:53:38.818872 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:38.819670 | orchestrator | 2025-06-22 11:53:38.819882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:38.820491 | orchestrator | Sunday 22 June 2025 11:53:38 +0000 (0:00:00.190) 0:00:53.606 *********** 2025-06-22 11:53:39.490216 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 11:53:39.490375 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 11:53:39.490954 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 
11:53:39.491848 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 11:53:39.493255 | orchestrator | 2025-06-22 11:53:39.493322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:39.493822 | orchestrator | Sunday 22 June 2025 11:53:39 +0000 (0:00:00.669) 0:00:54.276 *********** 2025-06-22 11:53:39.681055 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:39.681463 | orchestrator | 2025-06-22 11:53:39.682856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:39.683675 | orchestrator | Sunday 22 June 2025 11:53:39 +0000 (0:00:00.192) 0:00:54.469 *********** 2025-06-22 11:53:39.871138 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:39.871894 | orchestrator | 2025-06-22 11:53:39.872519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:39.873243 | orchestrator | Sunday 22 June 2025 11:53:39 +0000 (0:00:00.189) 0:00:54.658 *********** 2025-06-22 11:53:40.069151 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:40.069999 | orchestrator | 2025-06-22 11:53:40.070957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 11:53:40.073617 | orchestrator | Sunday 22 June 2025 11:53:40 +0000 (0:00:00.198) 0:00:54.857 *********** 2025-06-22 11:53:40.280102 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:40.280491 | orchestrator | 2025-06-22 11:53:40.281453 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 11:53:40.283645 | orchestrator | Sunday 22 June 2025 11:53:40 +0000 (0:00:00.210) 0:00:55.067 *********** 2025-06-22 11:53:40.608220 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:40.608847 | orchestrator | 2025-06-22 11:53:40.610322 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-06-22 11:53:40.611708 | orchestrator | Sunday 22 June 2025 11:53:40 +0000 (0:00:00.326) 0:00:55.394 *********** 2025-06-22 11:53:40.784529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a4028de-648e-5a19-94a5-5dc0f00dede1'}}) 2025-06-22 11:53:40.784871 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1d622d46-9f3b-5fb0-a039-cce126484330'}}) 2025-06-22 11:53:40.786718 | orchestrator | 2025-06-22 11:53:40.787656 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 11:53:40.788552 | orchestrator | Sunday 22 June 2025 11:53:40 +0000 (0:00:00.178) 0:00:55.572 *********** 2025-06-22 11:53:42.622460 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'}) 2025-06-22 11:53:42.622629 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'}) 2025-06-22 11:53:42.623819 | orchestrator | 2025-06-22 11:53:42.626305 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 11:53:42.627254 | orchestrator | Sunday 22 June 2025 11:53:42 +0000 (0:00:01.833) 0:00:57.406 *********** 2025-06-22 11:53:42.772548 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:42.772755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:42.772844 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:42.773773 | orchestrator | 2025-06-22 11:53:42.774626 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-22 11:53:42.775471 | orchestrator | Sunday 22 June 2025 11:53:42 +0000 (0:00:00.152) 0:00:57.558 *********** 2025-06-22 11:53:44.070427 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'}) 2025-06-22 11:53:44.071112 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'}) 2025-06-22 11:53:44.071904 | orchestrator | 2025-06-22 11:53:44.072823 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 11:53:44.073698 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:01.295) 0:00:58.854 *********** 2025-06-22 11:53:44.219983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:44.220166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:44.221327 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:44.221752 | orchestrator | 2025-06-22 11:53:44.222864 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 11:53:44.224490 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:00.152) 0:00:59.007 *********** 2025-06-22 11:53:44.349453 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:44.349553 | orchestrator | 2025-06-22 11:53:44.350366 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 11:53:44.352958 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:00.129) 0:00:59.137 *********** 2025-06-22 11:53:44.503782 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:44.504458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:44.505742 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:44.506731 | orchestrator | 2025-06-22 11:53:44.507509 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 11:53:44.508380 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:00.153) 0:00:59.291 *********** 2025-06-22 11:53:44.647456 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:44.648862 | orchestrator | 2025-06-22 11:53:44.651013 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 11:53:44.651191 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:00.143) 0:00:59.434 *********** 2025-06-22 11:53:44.799194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:44.800474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:44.801722 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:44.803246 | orchestrator | 2025-06-22 11:53:44.803291 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 11:53:44.804128 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:00.152) 0:00:59.586 *********** 2025-06-22 11:53:44.940979 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:44.942090 | orchestrator | 2025-06-22 11:53:44.942742 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 11:53:44.943616 | orchestrator | Sunday 22 June 2025 11:53:44 +0000 (0:00:00.140) 0:00:59.727 *********** 2025-06-22 11:53:45.079021 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:45.079715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:45.080643 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:45.080998 | orchestrator | 2025-06-22 11:53:45.082382 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 11:53:45.082416 | orchestrator | Sunday 22 June 2025 11:53:45 +0000 (0:00:00.139) 0:00:59.866 *********** 2025-06-22 11:53:45.215445 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:53:45.215715 | orchestrator | 2025-06-22 11:53:45.217851 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 11:53:45.218711 | orchestrator | Sunday 22 June 2025 11:53:45 +0000 (0:00:00.136) 0:01:00.003 *********** 2025-06-22 11:53:45.688918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:45.690010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:45.691580 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:45.692709 | orchestrator | 2025-06-22 11:53:45.693557 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 11:53:45.694659 | orchestrator | Sunday 22 June 2025 
11:53:45 +0000 (0:00:00.472) 0:01:00.475 *********** 2025-06-22 11:53:45.840020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:45.842383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:45.842448 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:45.842463 | orchestrator | 2025-06-22 11:53:45.843490 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 11:53:45.843763 | orchestrator | Sunday 22 June 2025 11:53:45 +0000 (0:00:00.151) 0:01:00.627 *********** 2025-06-22 11:53:45.995000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:45.996416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:45.997209 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:45.998314 | orchestrator | 2025-06-22 11:53:45.999860 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 11:53:45.999885 | orchestrator | Sunday 22 June 2025 11:53:45 +0000 (0:00:00.155) 0:01:00.782 *********** 2025-06-22 11:53:46.136421 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:46.136762 | orchestrator | 2025-06-22 11:53:46.137675 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 11:53:46.138540 | orchestrator | Sunday 22 June 2025 11:53:46 +0000 (0:00:00.140) 0:01:00.923 *********** 2025-06-22 11:53:46.285493 | orchestrator | skipping: [testbed-node-5] 2025-06-22 
11:53:46.285726 | orchestrator | 2025-06-22 11:53:46.287317 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 11:53:46.288239 | orchestrator | Sunday 22 June 2025 11:53:46 +0000 (0:00:00.147) 0:01:01.071 *********** 2025-06-22 11:53:46.418134 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:46.419446 | orchestrator | 2025-06-22 11:53:46.421438 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 11:53:46.421470 | orchestrator | Sunday 22 June 2025 11:53:46 +0000 (0:00:00.134) 0:01:01.205 *********** 2025-06-22 11:53:46.567634 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 11:53:46.568875 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 11:53:46.570100 | orchestrator | } 2025-06-22 11:53:46.570817 | orchestrator | 2025-06-22 11:53:46.571786 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 11:53:46.572611 | orchestrator | Sunday 22 June 2025 11:53:46 +0000 (0:00:00.149) 0:01:01.354 *********** 2025-06-22 11:53:46.717169 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 11:53:46.718396 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 11:53:46.719401 | orchestrator | } 2025-06-22 11:53:46.720171 | orchestrator | 2025-06-22 11:53:46.720987 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 11:53:46.721548 | orchestrator | Sunday 22 June 2025 11:53:46 +0000 (0:00:00.148) 0:01:01.503 *********** 2025-06-22 11:53:46.865316 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 11:53:46.867501 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 11:53:46.868239 | orchestrator | } 2025-06-22 11:53:46.869336 | orchestrator | 2025-06-22 11:53:46.870267 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 11:53:46.871005 | 
orchestrator | Sunday 22 June 2025 11:53:46 +0000 (0:00:00.149) 0:01:01.652 *********** 2025-06-22 11:53:47.400172 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:53:47.400646 | orchestrator | 2025-06-22 11:53:47.401361 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 11:53:47.402344 | orchestrator | Sunday 22 June 2025 11:53:47 +0000 (0:00:00.535) 0:01:02.187 *********** 2025-06-22 11:53:47.882889 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:53:47.884599 | orchestrator | 2025-06-22 11:53:47.885195 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 11:53:47.886489 | orchestrator | Sunday 22 June 2025 11:53:47 +0000 (0:00:00.481) 0:01:02.669 *********** 2025-06-22 11:53:48.392649 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:53:48.392758 | orchestrator | 2025-06-22 11:53:48.392774 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 11:53:48.393224 | orchestrator | Sunday 22 June 2025 11:53:48 +0000 (0:00:00.508) 0:01:03.177 *********** 2025-06-22 11:53:48.815401 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:53:48.816308 | orchestrator | 2025-06-22 11:53:48.817419 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 11:53:48.817937 | orchestrator | Sunday 22 June 2025 11:53:48 +0000 (0:00:00.424) 0:01:03.601 *********** 2025-06-22 11:53:48.944018 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:48.945332 | orchestrator | 2025-06-22 11:53:48.946133 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 11:53:48.946968 | orchestrator | Sunday 22 June 2025 11:53:48 +0000 (0:00:00.127) 0:01:03.729 *********** 2025-06-22 11:53:49.055460 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:49.056072 | orchestrator | 2025-06-22 11:53:49.057456 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 11:53:49.058878 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.112) 0:01:03.842 *********** 2025-06-22 11:53:49.204493 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 11:53:49.205969 | orchestrator |  "vgs_report": { 2025-06-22 11:53:49.207133 | orchestrator |  "vg": [] 2025-06-22 11:53:49.208842 | orchestrator |  } 2025-06-22 11:53:49.209984 | orchestrator | } 2025-06-22 11:53:49.211201 | orchestrator | 2025-06-22 11:53:49.211674 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 11:53:49.212676 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.148) 0:01:03.990 *********** 2025-06-22 11:53:49.346125 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:49.347421 | orchestrator | 2025-06-22 11:53:49.348227 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 11:53:49.350929 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.138) 0:01:04.128 *********** 2025-06-22 11:53:49.484993 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:49.485085 | orchestrator | 2025-06-22 11:53:49.485185 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 11:53:49.486148 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.142) 0:01:04.271 *********** 2025-06-22 11:53:49.629492 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:49.629621 | orchestrator | 2025-06-22 11:53:49.629729 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 11:53:49.630163 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.141) 0:01:04.412 *********** 2025-06-22 11:53:49.780880 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:49.781888 | orchestrator | 2025-06-22 11:53:49.782937 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 11:53:49.783800 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.153) 0:01:04.566 *********** 2025-06-22 11:53:49.928097 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:49.928898 | orchestrator | 2025-06-22 11:53:49.929924 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 11:53:49.931025 | orchestrator | Sunday 22 June 2025 11:53:49 +0000 (0:00:00.145) 0:01:04.712 *********** 2025-06-22 11:53:50.070970 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:50.071603 | orchestrator | 2025-06-22 11:53:50.073006 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 11:53:50.073762 | orchestrator | Sunday 22 June 2025 11:53:50 +0000 (0:00:00.145) 0:01:04.857 *********** 2025-06-22 11:53:50.207640 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:50.207746 | orchestrator | 2025-06-22 11:53:50.208296 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 11:53:50.209201 | orchestrator | Sunday 22 June 2025 11:53:50 +0000 (0:00:00.135) 0:01:04.993 *********** 2025-06-22 11:53:50.339260 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:50.339884 | orchestrator | 2025-06-22 11:53:50.340609 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 11:53:50.341499 | orchestrator | Sunday 22 June 2025 11:53:50 +0000 (0:00:00.133) 0:01:05.126 *********** 2025-06-22 11:53:50.660653 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:50.661120 | orchestrator | 2025-06-22 11:53:50.662250 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 11:53:50.662851 | orchestrator | Sunday 22 June 2025 11:53:50 +0000 (0:00:00.320) 0:01:05.447 *********** 
2025-06-22 11:53:50.801447 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:50.801816 | orchestrator | 2025-06-22 11:53:50.803864 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 11:53:50.804125 | orchestrator | Sunday 22 June 2025 11:53:50 +0000 (0:00:00.135) 0:01:05.583 *********** 2025-06-22 11:53:50.933986 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:50.934198 | orchestrator | 2025-06-22 11:53:50.934602 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 11:53:50.935332 | orchestrator | Sunday 22 June 2025 11:53:50 +0000 (0:00:00.138) 0:01:05.722 *********** 2025-06-22 11:53:51.087900 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.088560 | orchestrator | 2025-06-22 11:53:51.089609 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 11:53:51.090878 | orchestrator | Sunday 22 June 2025 11:53:51 +0000 (0:00:00.152) 0:01:05.875 *********** 2025-06-22 11:53:51.236053 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.236801 | orchestrator | 2025-06-22 11:53:51.237031 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 11:53:51.237124 | orchestrator | Sunday 22 June 2025 11:53:51 +0000 (0:00:00.147) 0:01:06.022 *********** 2025-06-22 11:53:51.368318 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.369235 | orchestrator | 2025-06-22 11:53:51.370630 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 11:53:51.371408 | orchestrator | Sunday 22 June 2025 11:53:51 +0000 (0:00:00.132) 0:01:06.155 *********** 2025-06-22 11:53:51.522804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 
11:53:51.523678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:51.524538 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.525732 | orchestrator | 2025-06-22 11:53:51.527108 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 11:53:51.527711 | orchestrator | Sunday 22 June 2025 11:53:51 +0000 (0:00:00.154) 0:01:06.309 *********** 2025-06-22 11:53:51.699440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:51.699638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:51.699766 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.700058 | orchestrator | 2025-06-22 11:53:51.700486 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 11:53:51.700821 | orchestrator | Sunday 22 June 2025 11:53:51 +0000 (0:00:00.175) 0:01:06.485 *********** 2025-06-22 11:53:51.843183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:51.843624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:51.844101 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.844684 | orchestrator | 2025-06-22 11:53:51.845082 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 11:53:51.845816 | orchestrator | Sunday 22 June 2025 
11:53:51 +0000 (0:00:00.144) 0:01:06.629 *********** 2025-06-22 11:53:51.994084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:51.995731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:51.996502 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:51.997612 | orchestrator | 2025-06-22 11:53:51.998723 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 11:53:51.999377 | orchestrator | Sunday 22 June 2025 11:53:51 +0000 (0:00:00.151) 0:01:06.781 *********** 2025-06-22 11:53:52.157980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:52.158446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:52.158715 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:53:52.160170 | orchestrator | 2025-06-22 11:53:52.160723 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 11:53:52.161285 | orchestrator | Sunday 22 June 2025 11:53:52 +0000 (0:00:00.163) 0:01:06.944 *********** 2025-06-22 11:53:52.326433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})  2025-06-22 11:53:52.326699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})  2025-06-22 11:53:52.327638 | orchestrator | 
skipping: [testbed-node-5]
2025-06-22 11:53:52.328263 | orchestrator |
2025-06-22 11:53:52.330413 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-22 11:53:52.330446 | orchestrator | Sunday 22 June 2025 11:53:52 +0000 (0:00:00.167) 0:01:07.112 ***********
2025-06-22 11:53:52.734128 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})
2025-06-22 11:53:52.734263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})
2025-06-22 11:53:52.734281 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:52.734405 | orchestrator |
2025-06-22 11:53:52.735748 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-22 11:53:52.736185 | orchestrator | Sunday 22 June 2025 11:53:52 +0000 (0:00:00.405) 0:01:07.518 ***********
2025-06-22 11:53:52.887532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})
2025-06-22 11:53:52.888531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})
2025-06-22 11:53:52.889541 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:52.891117 | orchestrator |
2025-06-22 11:53:52.892056 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-22 11:53:52.893046 | orchestrator | Sunday 22 June 2025 11:53:52 +0000 (0:00:00.154) 0:01:07.673 ***********
2025-06-22 11:53:53.380283 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:53:53.380380 | orchestrator |
2025-06-22 11:53:53.381467 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-22 11:53:53.382263 | orchestrator | Sunday 22 June 2025 11:53:53 +0000 (0:00:00.494) 0:01:08.167 ***********
2025-06-22 11:53:53.899466 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:53:53.900363 | orchestrator |
2025-06-22 11:53:53.901342 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-22 11:53:53.902389 | orchestrator | Sunday 22 June 2025 11:53:53 +0000 (0:00:00.518) 0:01:08.686 ***********
2025-06-22 11:53:54.047179 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:53:54.047285 | orchestrator |
2025-06-22 11:53:54.047748 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-22 11:53:54.048199 | orchestrator | Sunday 22 June 2025 11:54:54 +0000 (0:00:00.146) 0:01:08.832 ***********
2025-06-22 11:53:54.217196 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'vg_name': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})
2025-06-22 11:53:54.219888 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'vg_name': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})
2025-06-22 11:53:54.219924 | orchestrator |
2025-06-22 11:53:54.220775 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-22 11:53:54.221764 | orchestrator | Sunday 22 June 2025 11:53:54 +0000 (0:00:00.170) 0:01:09.003 ***********
2025-06-22 11:53:54.381396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})
2025-06-22 11:53:54.383869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})
2025-06-22 11:53:54.383903 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:54.384528 | orchestrator |
2025-06-22 11:53:54.384955 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-22 11:53:54.386545 | orchestrator | Sunday 22 June 2025 11:53:54 +0000 (0:00:00.160) 0:01:09.164 ***********
2025-06-22 11:53:54.531246 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})
2025-06-22 11:53:54.531434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})
2025-06-22 11:53:54.532162 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:54.532687 | orchestrator |
2025-06-22 11:53:54.534128 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-22 11:53:54.535979 | orchestrator | Sunday 22 June 2025 11:53:54 +0000 (0:00:00.154) 0:01:09.318 ***********
2025-06-22 11:53:54.683191 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'})
2025-06-22 11:53:54.683373 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'})
2025-06-22 11:53:54.684130 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:53:54.685322 | orchestrator |
2025-06-22 11:53:54.685713 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-22 11:53:54.686486 | orchestrator | Sunday 22 June 2025 11:53:54 +0000 (0:00:00.152) 0:01:09.470 ***********
2025-06-22 11:53:54.831305 | orchestrator | ok: [testbed-node-5] => {
2025-06-22 11:53:54.832661 | orchestrator |     "lvm_report": {
2025-06-22 11:53:54.833543 | orchestrator |         "lv": [
2025-06-22 11:53:54.835163 | orchestrator |             {
2025-06-22 11:53:54.836912 | orchestrator |                 "lv_name": "osd-block-1d622d46-9f3b-5fb0-a039-cce126484330",
2025-06-22 11:53:54.837821 | orchestrator |                 "vg_name": "ceph-1d622d46-9f3b-5fb0-a039-cce126484330"
2025-06-22 11:53:54.839102 | orchestrator |             },
2025-06-22 11:53:54.839505 | orchestrator |             {
2025-06-22 11:53:54.840681 | orchestrator |                 "lv_name": "osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1",
2025-06-22 11:53:54.841307 | orchestrator |                 "vg_name": "ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1"
2025-06-22 11:53:54.842532 | orchestrator |             }
2025-06-22 11:53:54.843141 | orchestrator |         ],
2025-06-22 11:53:54.843663 | orchestrator |         "pv": [
2025-06-22 11:53:54.844222 | orchestrator |             {
2025-06-22 11:53:54.845148 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-22 11:53:54.845857 | orchestrator |                 "vg_name": "ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1"
2025-06-22 11:53:54.846309 | orchestrator |             },
2025-06-22 11:53:54.847109 | orchestrator |             {
2025-06-22 11:53:54.847517 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-22 11:53:54.848352 | orchestrator |                 "vg_name": "ceph-1d622d46-9f3b-5fb0-a039-cce126484330"
2025-06-22 11:53:54.849022 | orchestrator |             }
2025-06-22 11:53:54.850053 | orchestrator |         ]
2025-06-22 11:53:54.850922 | orchestrator |     }
2025-06-22 11:53:54.851520 | orchestrator | }
2025-06-22 11:53:54.852051 | orchestrator |
2025-06-22 11:53:54.852655 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:53:54.853172 | orchestrator | 2025-06-22 11:53:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:53:54.853422 | orchestrator | 2025-06-22 11:53:54 | INFO  | Please wait and do not abort execution.
2025-06-22 11:53:54.853956 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-22 11:53:54.854429 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-22 11:53:54.854875 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-22 11:53:54.856034 | orchestrator |
2025-06-22 11:53:54.856927 | orchestrator |
2025-06-22 11:53:54.857365 | orchestrator |
2025-06-22 11:53:54.859266 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:53:54.860217 | orchestrator | Sunday 22 June 2025 11:53:54 +0000 (0:00:00.147) 0:01:09.618 ***********
2025-06-22 11:53:54.860868 | orchestrator | ===============================================================================
2025-06-22 11:53:54.861289 | orchestrator | Create block VGs -------------------------------------------------------- 5.57s
2025-06-22 11:53:54.861922 | orchestrator | Create block LVs -------------------------------------------------------- 4.00s
2025-06-22 11:53:54.862334 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s
2025-06-22 11:53:54.862730 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s
2025-06-22 11:53:54.863262 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.51s
2025-06-22 11:53:54.863633 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.51s
2025-06-22 11:53:54.864355 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.49s
2025-06-22 11:53:54.864583 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s
2025-06-22 11:53:54.864972 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s
2025-06-22 11:53:54.865397 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2025-06-22 11:53:54.865802 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s
2025-06-22 11:53:54.866353 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-06-22 11:53:54.866780 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.77s
2025-06-22 11:53:54.867106 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-06-22 11:53:54.867669 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.71s
2025-06-22 11:53:54.868083 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s
2025-06-22 11:53:54.868487 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.67s
2025-06-22 11:53:54.869025 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.67s
2025-06-22 11:53:54.869410 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-06-22 11:53:54.869873 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-06-22 11:53:57.220747 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:53:57.220864 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:53:57.220887 | orchestrator | Registering Redlock._release_script
2025-06-22 11:53:57.306781 | orchestrator | 2025-06-22 11:53:57 | INFO  | Task 73c103c4-dcb5-422c-8737-5a4625464757 (facts) was prepared for execution.
2025-06-22 11:53:57.306871 | orchestrator | 2025-06-22 11:53:57 | INFO  | It takes a moment until task 73c103c4-dcb5-422c-8737-5a4625464757 (facts) has been started and output is visible here.
2025-06-22 11:54:01.326762 | orchestrator |
2025-06-22 11:54:01.327248 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-22 11:54:01.328758 | orchestrator |
2025-06-22 11:54:01.333681 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-22 11:54:01.334229 | orchestrator | Sunday 22 June 2025 11:54:01 +0000 (0:00:00.257) 0:00:00.257 ***********
2025-06-22 11:54:02.432521 | orchestrator | ok: [testbed-manager]
2025-06-22 11:54:02.433522 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:54:02.435015 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:54:02.436026 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:54:02.437076 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:54:02.437985 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:54:02.438639 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:54:02.439449 | orchestrator |
2025-06-22 11:54:02.440153 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-22 11:54:02.440853 | orchestrator | Sunday 22 June 2025 11:54:02 +0000 (0:00:01.103) 0:00:01.360 ***********
2025-06-22 11:54:02.593117 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:54:02.670260 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:54:02.747952 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:54:02.824948 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:54:02.901203 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:54:03.623699 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:54:03.624943 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:54:03.625472 | orchestrator |
2025-06-22 11:54:03.625691 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-22 11:54:03.626209 | orchestrator |
2025-06-22 11:54:03.627029 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-22 11:54:03.627836 | orchestrator | Sunday 22 June 2025 11:54:03 +0000 (0:00:01.196) 0:00:02.557 ***********
2025-06-22 11:54:08.368925 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:54:08.370732 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:54:08.373793 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:54:08.373864 | orchestrator | ok: [testbed-manager]
2025-06-22 11:54:08.373887 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:54:08.375007 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:54:08.376072 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:54:08.377222 | orchestrator |
2025-06-22 11:54:08.378660 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-22 11:54:08.379224 | orchestrator |
2025-06-22 11:54:08.380370 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-22 11:54:08.380986 | orchestrator | Sunday 22 June 2025 11:54:08 +0000 (0:00:04.745) 0:00:07.302 ***********
2025-06-22 11:54:08.530741 | orchestrator | skipping: [testbed-manager]
2025-06-22 11:54:08.606594 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:54:08.678343 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:54:08.754879 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:54:08.832989 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:54:08.873165 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:54:08.873709 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:54:08.874521 | orchestrator |
2025-06-22 11:54:08.875854 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:54:08.875897 | orchestrator | 2025-06-22 11:54:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 11:54:08.875912 | orchestrator | 2025-06-22 11:54:08 | INFO  | Please wait and do not abort execution.
2025-06-22 11:54:08.876461 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.877433 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.878941 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.879837 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.881397 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.881720 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.882201 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 11:54:08.882633 | orchestrator |
2025-06-22 11:54:08.883062 | orchestrator |
2025-06-22 11:54:08.883548 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:54:08.883875 | orchestrator | Sunday 22 June 2025 11:54:08 +0000 (0:00:00.505) 0:00:07.808 ***********
2025-06-22 11:54:08.884253 | orchestrator | ===============================================================================
2025-06-22 11:54:08.884759 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.75s
2025-06-22 11:54:08.885089 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2025-06-22 11:54:08.885831 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s
2025-06-22 11:54:08.885953 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-06-22
11:54:09.490858 | orchestrator |
2025-06-22 11:54:09.494202 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 22 11:54:09 UTC 2025
2025-06-22 11:54:09.494247 | orchestrator |
2025-06-22 11:54:11.118104 | orchestrator | 2025-06-22 11:54:11 | INFO  | Collection nutshell is prepared for execution
2025-06-22 11:54:11.118209 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [0] - dotfiles
2025-06-22 11:54:11.122873 | orchestrator | Registering Redlock._acquired_script
2025-06-22 11:54:11.122984 | orchestrator | Registering Redlock._extend_script
2025-06-22 11:54:11.123002 | orchestrator | Registering Redlock._release_script
2025-06-22 11:54:11.127983 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [0] - homer
2025-06-22 11:54:11.128045 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [0] - netdata
2025-06-22 11:54:11.128127 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [0] - openstackclient
2025-06-22 11:54:11.128143 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [0] - phpmyadmin
2025-06-22 11:54:11.128180 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [0] - common
2025-06-22 11:54:11.129845 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [1] -- loadbalancer
2025-06-22 11:54:11.129867 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [2] --- opensearch
2025-06-22 11:54:11.129879 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [2] --- mariadb-ng
2025-06-22 11:54:11.129890 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [3] ---- horizon
2025-06-22 11:54:11.130217 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [3] ---- keystone
2025-06-22 11:54:11.130238 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [4] ----- neutron
2025-06-22 11:54:11.130250 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [5] ------ wait-for-nova
2025-06-22 11:54:11.130263 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [5] ------ octavia
2025-06-22 11:54:11.130699 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [4] ----- barbican
2025-06-22 11:54:11.130738 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [4] ----- designate
2025-06-22 11:54:11.130982 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [4] ----- ironic
2025-06-22 11:54:11.131000 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [4] ----- placement
2025-06-22 11:54:11.131011 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [4] ----- magnum
2025-06-22 11:54:11.131322 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [1] -- openvswitch
2025-06-22 11:54:11.131344 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [2] --- ovn
2025-06-22 11:54:11.131544 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [1] -- memcached
2025-06-22 11:54:11.131563 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [1] -- redis
2025-06-22 11:54:11.131835 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [1] -- rabbitmq-ng
2025-06-22 11:54:11.131853 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [0] - kubernetes
2025-06-22 11:54:11.133523 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [1] -- kubeconfig
2025-06-22 11:54:11.133748 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [1] -- copy-kubeconfig
2025-06-22 11:54:11.133768 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [0] - ceph
2025-06-22 11:54:11.135250 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [1] -- ceph-pools
2025-06-22 11:54:11.135273 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [2] --- copy-ceph-keys
2025-06-22 11:54:11.135285 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [3] ---- cephclient
2025-06-22 11:54:11.135492 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-22 11:54:11.135519 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [4] ----- wait-for-keystone
2025-06-22 11:54:11.135531 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-22 11:54:11.135542 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [5] ------ glance
2025-06-22 11:54:11.135553 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [5] ------ cinder
2025-06-22 11:54:11.135677 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [5] ------ nova
2025-06-22 11:54:11.135874 | orchestrator | 2025-06-22 11:54:11 | INFO  | A [4] ----- prometheus
2025-06-22 11:54:11.135893 | orchestrator | 2025-06-22 11:54:11 | INFO  | D [5] ------ grafana
2025-06-22 11:54:11.334673 | orchestrator | 2025-06-22 11:54:11 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-22 11:54:11.335218 | orchestrator | 2025-06-22 11:54:11 | INFO  | Tasks are running in the background
2025-06-22 11:54:13.712914 | orchestrator | 2025-06-22 11:54:13 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-22 11:54:15.856711 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:15.857515 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:15.857930 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:15.865188 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:15.865751 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:15.866548 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:15.867269 | orchestrator | 2025-06-22 11:54:15 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:15.867289 | orchestrator | 2025-06-22 11:54:15 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:18.924052 | orchestrator | 2025-06-22 11:54:18 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:18.925178 | orchestrator | 2025-06-22 11:54:18 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:18.929092
| orchestrator | 2025-06-22 11:54:18 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:18.932891 | orchestrator | 2025-06-22 11:54:18 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:18.933529 | orchestrator | 2025-06-22 11:54:18 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:18.934382 | orchestrator | 2025-06-22 11:54:18 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:18.935136 | orchestrator | 2025-06-22 11:54:18 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:18.935353 | orchestrator | 2025-06-22 11:54:18 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:21.969347 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:21.972913 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:21.972943 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:21.978476 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:21.978855 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:21.980193 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:21.980956 | orchestrator | 2025-06-22 11:54:21 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:21.981063 | orchestrator | 2025-06-22 11:54:21 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:25.073165 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:25.073661 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:25.076500 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:25.078465 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:25.078852 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:25.082537 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:25.082559 | orchestrator | 2025-06-22 11:54:25 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:25.082590 | orchestrator | 2025-06-22 11:54:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:28.122001 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:28.122440 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:28.123838 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:28.125327 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:28.128787 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:28.131044 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:28.132766 | orchestrator | 2025-06-22 11:54:28 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:28.132837 | orchestrator | 2025-06-22 11:54:28 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:31.190430 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:31.195171 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:31.197212 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:31.200557 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:31.205978 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:31.208759 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:31.211053 | orchestrator | 2025-06-22 11:54:31 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:31.214748 | orchestrator | 2025-06-22 11:54:31 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:34.270948 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:34.277035 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:34.284742 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:34.284783 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:34.285859 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:34.286597 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:34.293639 | orchestrator | 2025-06-22 11:54:34 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:34.293684 | orchestrator | 2025-06-22 11:54:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:37.359177 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:37.359259 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED
2025-06-22 11:54:37.359274 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED
2025-06-22 11:54:37.359286 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:54:37.360593 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state STARTED
2025-06-22 11:54:37.361534 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:54:37.366501 | orchestrator | 2025-06-22 11:54:37 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED
2025-06-22 11:54:37.366597 | orchestrator | 2025-06-22 11:54:37 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:54:40.406086 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:54:40.418592 | orchestrator |
2025-06-22 11:54:40.418656 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-22 11:54:40.418672 | orchestrator |
2025-06-22 11:54:40.418684 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-06-22 11:54:40.418695 | orchestrator | Sunday 22 June 2025 11:54:22 +0000 (0:00:00.765) 0:00:00.765 ***********
2025-06-22 11:54:40.418707 | orchestrator | changed: [testbed-manager]
2025-06-22 11:54:40.418718 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:54:40.418729 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:54:40.418740 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:54:40.418751 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:54:40.418761 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:54:40.418772 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:54:40.418783 | orchestrator |
2025-06-22 11:54:40.418794 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-22 11:54:40.418805 | orchestrator | Sunday 22 June 2025 11:54:26 +0000 (0:00:04.444) 0:00:05.210 ***********
2025-06-22 11:54:40.418816 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-22 11:54:40.418827 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-22 11:54:40.418838 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-22 11:54:40.418848 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-22 11:54:40.418859 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-22 11:54:40.418870 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-22 11:54:40.418880 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-22 11:54:40.418892 | orchestrator |
2025-06-22 11:54:40.418903 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-22 11:54:40.418914 | orchestrator | Sunday 22 June 2025 11:54:28 +0000 (0:00:01.780) 0:00:06.990 ***********
2025-06-22 11:54:40.418937 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:27.438195', 'end': '2025-06-22 11:54:27.447647', 'delta': '0:00:00.009452', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-22 11:54:40.418971 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:27.405273', 'end': '2025-06-22 11:54:27.409367', 'delta': '0:00:00.004094', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-22 11:54:40.418984 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:27.455339', 'end': '2025-06-22 11:54:27.462061', 'delta': '0:00:00.006722', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-22 11:54:40.419016 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:27.679305', 'end': '2025-06-22 11:54:27.687471', 'delta': '0:00:00.008166', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-22 11:54:40.419029 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:27.807204', 'end': '2025-06-22 11:54:27.815927', 'delta': '0:00:00.008723', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-22 11:54:40.419044 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:28.011407', 'end': '2025-06-22 11:54:28.019851', 'delta': '0:00:00.008444', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-22 11:54:40.419068 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 11:54:28.142231', 'end': '2025-06-22 11:54:28.147802', 'delta': '0:00:00.005571', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines':
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 11:54:40.419079 | orchestrator | 2025-06-22 11:54:40.419091 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-06-22 11:54:40.419102 | orchestrator | Sunday 22 June 2025 11:54:30 +0000 (0:00:02.532) 0:00:09.523 *********** 2025-06-22 11:54:40.419113 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 11:54:40.419124 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 11:54:40.419135 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 11:54:40.419145 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 11:54:40.419156 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 11:54:40.419169 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 11:54:40.419181 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 11:54:40.419193 | orchestrator | 2025-06-22 11:54:40.419206 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-06-22 11:54:40.419218 | orchestrator | Sunday 22 June 2025 11:54:33 +0000 (0:00:02.274) 0:00:11.798 *********** 2025-06-22 11:54:40.419230 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-22 11:54:40.419243 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 11:54:40.419255 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 11:54:40.419266 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 11:54:40.419278 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 11:54:40.419290 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 11:54:40.419302 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 11:54:40.419314 | orchestrator | 2025-06-22 11:54:40.419326 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:54:40.419345 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419358 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419371 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419384 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419396 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419408 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419420 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:54:40.419438 | orchestrator | 2025-06-22 11:54:40.419451 | orchestrator | 2025-06-22 11:54:40.419463 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-06-22 11:54:40.419476 | orchestrator | Sunday 22 June 2025 11:54:37 +0000 (0:00:04.151) 0:00:15.950 *********** 2025-06-22 11:54:40.419488 | orchestrator | =============================================================================== 2025-06-22 11:54:40.419500 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.44s 2025-06-22 11:54:40.419511 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.15s 2025-06-22 11:54:40.419521 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.53s 2025-06-22 11:54:40.419532 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.27s 2025-06-22 11:54:40.419543 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.78s 2025-06-22 11:54:40.419605 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:40.419621 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED 2025-06-22 11:54:40.419632 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:40.419643 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:40.419654 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task 6b5554c4-1f7e-4efb-8dab-be6f621b74e6 is in state SUCCESS 2025-06-22 11:54:40.421147 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:40.426540 | orchestrator | 2025-06-22 11:54:40 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:40.426615 | orchestrator | 2025-06-22 11:54:40 | INFO  | Wait 1 second(s) 
until the next check 2025-06-22 11:54:43.487544 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:54:43.487684 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:43.487699 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED 2025-06-22 11:54:43.487711 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:43.493623 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:43.493652 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:43.493663 | orchestrator | 2025-06-22 11:54:43 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:43.493675 | orchestrator | 2025-06-22 11:54:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:54:46.543332 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:54:46.543477 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:46.543874 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED 2025-06-22 11:54:46.544394 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:46.544796 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:46.545289 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:46.545933 | orchestrator | 2025-06-22 11:54:46 | INFO  | Task 
51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:46.545960 | orchestrator | 2025-06-22 11:54:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:54:49.592692 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:54:49.592803 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:49.592829 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED 2025-06-22 11:54:49.594433 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:49.595013 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:49.596227 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:49.597370 | orchestrator | 2025-06-22 11:54:49 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:49.597428 | orchestrator | 2025-06-22 11:54:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:54:52.642718 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:54:52.644515 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:52.645448 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED 2025-06-22 11:54:52.646204 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:52.646956 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:52.648249 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task 
56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:52.649137 | orchestrator | 2025-06-22 11:54:52 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:52.649161 | orchestrator | 2025-06-22 11:54:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:54:55.702325 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:54:55.707910 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:55.707957 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state STARTED 2025-06-22 11:54:55.713970 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:55.713993 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:55.719007 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:55.719052 | orchestrator | 2025-06-22 11:54:55 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:55.719064 | orchestrator | 2025-06-22 11:54:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:54:58.754630 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:54:58.756297 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:54:58.757656 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task 951e4307-35f8-49b1-8e0f-f3b6df05aa36 is in state SUCCESS 2025-06-22 11:54:58.759806 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:54:58.762640 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task 
6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:54:58.765187 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:54:58.765667 | orchestrator | 2025-06-22 11:54:58 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:54:58.765688 | orchestrator | 2025-06-22 11:54:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:01.805090 | orchestrator | 2025-06-22 11:55:01 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:01.806777 | orchestrator | 2025-06-22 11:55:01 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:55:01.808085 | orchestrator | 2025-06-22 11:55:01 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:01.811180 | orchestrator | 2025-06-22 11:55:01 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:01.812324 | orchestrator | 2025-06-22 11:55:01 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:01.813533 | orchestrator | 2025-06-22 11:55:01 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:01.813651 | orchestrator | 2025-06-22 11:55:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:04.861033 | orchestrator | 2025-06-22 11:55:04 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:04.863488 | orchestrator | 2025-06-22 11:55:04 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:55:04.864966 | orchestrator | 2025-06-22 11:55:04 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:04.867115 | orchestrator | 2025-06-22 11:55:04 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:04.869621 | orchestrator | 2025-06-22 11:55:04 | INFO  | Task 
56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:04.873218 | orchestrator | 2025-06-22 11:55:04 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:04.873249 | orchestrator | 2025-06-22 11:55:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:07.920260 | orchestrator | 2025-06-22 11:55:07 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:07.921072 | orchestrator | 2025-06-22 11:55:07 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:55:07.926623 | orchestrator | 2025-06-22 11:55:07 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:07.928727 | orchestrator | 2025-06-22 11:55:07 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:07.929867 | orchestrator | 2025-06-22 11:55:07 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:07.930914 | orchestrator | 2025-06-22 11:55:07 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:07.930958 | orchestrator | 2025-06-22 11:55:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:10.974521 | orchestrator | 2025-06-22 11:55:10 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:10.975028 | orchestrator | 2025-06-22 11:55:10 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state STARTED 2025-06-22 11:55:10.977424 | orchestrator | 2025-06-22 11:55:10 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:10.978957 | orchestrator | 2025-06-22 11:55:10 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:10.979933 | orchestrator | 2025-06-22 11:55:10 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:10.982301 | orchestrator | 2025-06-22 11:55:10 | INFO  | Task 
51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:10.983033 | orchestrator | 2025-06-22 11:55:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:14.040094 | orchestrator | 2025-06-22 11:55:14 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:14.040163 | orchestrator | 2025-06-22 11:55:14 | INFO  | Task c39a44a8-ba06-41ea-90ba-17cc086f3281 is in state SUCCESS 2025-06-22 11:55:14.040173 | orchestrator | 2025-06-22 11:55:14 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:14.042747 | orchestrator | 2025-06-22 11:55:14 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:14.042902 | orchestrator | 2025-06-22 11:55:14 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:14.043220 | orchestrator | 2025-06-22 11:55:14 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:14.043390 | orchestrator | 2025-06-22 11:55:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:17.079510 | orchestrator | 2025-06-22 11:55:17 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:17.080674 | orchestrator | 2025-06-22 11:55:17 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:17.080893 | orchestrator | 2025-06-22 11:55:17 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:17.081530 | orchestrator | 2025-06-22 11:55:17 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:17.082608 | orchestrator | 2025-06-22 11:55:17 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:17.082635 | orchestrator | 2025-06-22 11:55:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:20.118011 | orchestrator | 2025-06-22 11:55:20 | INFO  | Task 
db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:20.120429 | orchestrator | 2025-06-22 11:55:20 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:20.121767 | orchestrator | 2025-06-22 11:55:20 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:20.125743 | orchestrator | 2025-06-22 11:55:20 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:20.125792 | orchestrator | 2025-06-22 11:55:20 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:20.125805 | orchestrator | 2025-06-22 11:55:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:23.174292 | orchestrator | 2025-06-22 11:55:23 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:23.174405 | orchestrator | 2025-06-22 11:55:23 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:23.175438 | orchestrator | 2025-06-22 11:55:23 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:23.179939 | orchestrator | 2025-06-22 11:55:23 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:23.180000 | orchestrator | 2025-06-22 11:55:23 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:23.180022 | orchestrator | 2025-06-22 11:55:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:26.249637 | orchestrator | 2025-06-22 11:55:26 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:26.252388 | orchestrator | 2025-06-22 11:55:26 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:26.255074 | orchestrator | 2025-06-22 11:55:26 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:26.261918 | orchestrator | 2025-06-22 11:55:26 | INFO  | Task 
56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:26.263349 | orchestrator | 2025-06-22 11:55:26 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:26.263364 | orchestrator | 2025-06-22 11:55:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:29.335727 | orchestrator | 2025-06-22 11:55:29 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:29.339834 | orchestrator | 2025-06-22 11:55:29 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:29.349097 | orchestrator | 2025-06-22 11:55:29 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:29.349170 | orchestrator | 2025-06-22 11:55:29 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:29.355970 | orchestrator | 2025-06-22 11:55:29 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state STARTED 2025-06-22 11:55:29.356108 | orchestrator | 2025-06-22 11:55:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:55:32.413311 | orchestrator | 2025-06-22 11:55:32 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:55:32.414748 | orchestrator | 2025-06-22 11:55:32 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED 2025-06-22 11:55:32.416463 | orchestrator | 2025-06-22 11:55:32 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:55:32.417675 | orchestrator | 2025-06-22 11:55:32 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:55:32.420281 | orchestrator | 2025-06-22 11:55:32.420333 | orchestrator | 2025-06-22 11:55:32.420348 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-22 11:55:32.420361 | orchestrator | 2025-06-22 11:55:32.420373 | orchestrator | TASK [osism.services.homer : Inform about new parameter 
homer_url_opensearch_dashboards] *** 2025-06-22 11:55:32.420385 | orchestrator | Sunday 22 June 2025 11:54:22 +0000 (0:00:00.770) 0:00:00.770 *********** 2025-06-22 11:55:32.420397 | orchestrator | ok: [testbed-manager] => { 2025-06-22 11:55:32.420411 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-22 11:55:32.420424 | orchestrator | } 2025-06-22 11:55:32.420436 | orchestrator | 2025-06-22 11:55:32.420448 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-22 11:55:32.420459 | orchestrator | Sunday 22 June 2025 11:54:22 +0000 (0:00:00.594) 0:00:01.364 *********** 2025-06-22 11:55:32.420470 | orchestrator | ok: [testbed-manager] 2025-06-22 11:55:32.420483 | orchestrator | 2025-06-22 11:55:32.420494 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-22 11:55:32.420506 | orchestrator | Sunday 22 June 2025 11:54:24 +0000 (0:00:01.781) 0:00:03.145 *********** 2025-06-22 11:55:32.420538 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-22 11:55:32.420550 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-22 11:55:32.420584 | orchestrator | 2025-06-22 11:55:32.420604 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-22 11:55:32.420623 | orchestrator | Sunday 22 June 2025 11:54:26 +0000 (0:00:01.593) 0:00:04.739 *********** 2025-06-22 11:55:32.420641 | orchestrator | changed: [testbed-manager] 2025-06-22 11:55:32.420658 | orchestrator | 2025-06-22 11:55:32.420669 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-22 11:55:32.420680 | orchestrator | Sunday 22 June 2025 11:54:28 +0000 (0:00:02.134) 0:00:06.874 *********** 2025-06-22 11:55:32.420691 | orchestrator | changed: [testbed-manager] 2025-06-22 
11:55:32.420701 | orchestrator | 2025-06-22 11:55:32.420718 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-22 11:55:32.420729 | orchestrator | Sunday 22 June 2025 11:54:29 +0000 (0:00:01.576) 0:00:08.450 *********** 2025-06-22 11:55:32.420740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-06-22 11:55:32.420751 | orchestrator | ok: [testbed-manager] 2025-06-22 11:55:32.420762 | orchestrator | 2025-06-22 11:55:32.420772 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-22 11:55:32.420783 | orchestrator | Sunday 22 June 2025 11:54:54 +0000 (0:00:24.732) 0:00:33.182 *********** 2025-06-22 11:55:32.420794 | orchestrator | changed: [testbed-manager] 2025-06-22 11:55:32.420805 | orchestrator | 2025-06-22 11:55:32.420815 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:55:32.420827 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:55:32.420839 | orchestrator | 2025-06-22 11:55:32.420850 | orchestrator | 2025-06-22 11:55:32.420860 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:55:32.420871 | orchestrator | Sunday 22 June 2025 11:54:56 +0000 (0:00:01.657) 0:00:34.840 *********** 2025-06-22 11:55:32.420881 | orchestrator | =============================================================================== 2025-06-22 11:55:32.420892 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.73s 2025-06-22 11:55:32.420903 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.13s 2025-06-22 11:55:32.420913 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.78s 2025-06-22 11:55:32.420925 | orchestrator | 
osism.services.homer : Restart homer service ---------------------------- 1.66s
2025-06-22 11:55:32.420935 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.59s
2025-06-22 11:55:32.420946 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.58s
2025-06-22 11:55:32.420957 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.59s
2025-06-22 11:55:32.420967 | orchestrator |
2025-06-22 11:55:32.420978 | orchestrator |
2025-06-22 11:55:32.420989 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-22 11:55:32.420999 | orchestrator |
2025-06-22 11:55:32.421010 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-22 11:55:32.421021 | orchestrator | Sunday 22 June 2025 11:54:23 +0000 (0:00:00.867) 0:00:00.867 ***********
2025-06-22 11:55:32.421032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-22 11:55:32.421044 | orchestrator |
2025-06-22 11:55:32.421055 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-22 11:55:32.421065 | orchestrator | Sunday 22 June 2025 11:54:23 +0000 (0:00:00.580) 0:00:01.447 ***********
2025-06-22 11:55:32.421076 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-22 11:55:32.421087 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-22 11:55:32.421105 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-22 11:55:32.421116 | orchestrator |
2025-06-22 11:55:32.421127 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-22 11:55:32.421138 | orchestrator | Sunday 22 June 2025 11:54:25 +0000 (0:00:02.027) 0:00:03.475 ***********
2025-06-22 11:55:32.421149 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.421159 | orchestrator |
2025-06-22 11:55:32.421170 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-22 11:55:32.421181 | orchestrator | Sunday 22 June 2025 11:54:27 +0000 (0:00:01.739) 0:00:05.214 ***********
2025-06-22 11:55:32.421206 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-22 11:55:32.421217 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.421228 | orchestrator |
2025-06-22 11:55:32.421239 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-22 11:55:32.421249 | orchestrator | Sunday 22 June 2025 11:55:05 +0000 (0:00:38.232) 0:00:43.446 ***********
2025-06-22 11:55:32.421260 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.421271 | orchestrator |
2025-06-22 11:55:32.421281 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-22 11:55:32.421293 | orchestrator | Sunday 22 June 2025 11:55:06 +0000 (0:00:00.722) 0:00:44.169 ***********
2025-06-22 11:55:32.421304 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.421314 | orchestrator |
2025-06-22 11:55:32.421325 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-22 11:55:32.421336 | orchestrator | Sunday 22 June 2025 11:55:06 +0000 (0:00:00.591) 0:00:44.761 ***********
2025-06-22 11:55:32.421347 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.421357 | orchestrator |
2025-06-22 11:55:32.421368 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-22 11:55:32.421379 | orchestrator | Sunday 22 June 2025 11:55:08 +0000 (0:00:01.673) 0:00:46.434 ***********
2025-06-22 11:55:32.421389 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.421400 | orchestrator |
2025-06-22 11:55:32.421411 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-22 11:55:32.421422 | orchestrator | Sunday 22 June 2025 11:55:09 +0000 (0:00:01.006) 0:00:47.441 ***********
2025-06-22 11:55:32.421432 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.421443 | orchestrator |
2025-06-22 11:55:32.421454 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-22 11:55:32.421464 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:01.140) 0:00:48.581 ***********
2025-06-22 11:55:32.421475 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.421486 | orchestrator |
2025-06-22 11:55:32.421501 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:55:32.421512 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.421523 | orchestrator |
2025-06-22 11:55:32.421533 | orchestrator |
2025-06-22 11:55:32.421544 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:55:32.421554 | orchestrator | Sunday 22 June 2025 11:55:11 +0000 (0:00:00.418) 0:00:48.999 ***********
2025-06-22 11:55:32.421619 | orchestrator | ===============================================================================
2025-06-22 11:55:32.421632 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.23s
2025-06-22 11:55:32.421643 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.04s
2025-06-22 11:55:32.421654 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.73s
2025-06-22 11:55:32.421664 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.67s
2025-06-22 11:55:32.421675 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.14s
2025-06-22 11:55:32.421686 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.01s
2025-06-22 11:55:32.421703 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.72s
2025-06-22 11:55:32.421714 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.59s
2025-06-22 11:55:32.421724 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.58s
2025-06-22 11:55:32.421735 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s
2025-06-22 11:55:32.421746 | orchestrator |
2025-06-22 11:55:32.421756 | orchestrator |
2025-06-22 11:55:32.421767 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 11:55:32.421777 | orchestrator |
2025-06-22 11:55:32.421788 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 11:55:32.421799 | orchestrator | Sunday 22 June 2025 11:54:23 +0000 (0:00:00.612) 0:00:00.612 ***********
2025-06-22 11:55:32.421809 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-22 11:55:32.421820 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-22 11:55:32.421830 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-22 11:55:32.421841 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-22 11:55:32.421851 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-22 11:55:32.421862 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-22 11:55:32.421873 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-06-22 11:55:32.421883 | orchestrator |
2025-06-22 11:55:32.421894 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-22 11:55:32.421904 | orchestrator |
2025-06-22 11:55:32.421915 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-22 11:55:32.421926 | orchestrator | Sunday 22 June 2025 11:54:25 +0000 (0:00:02.187) 0:00:02.800 ***********
2025-06-22 11:55:32.421949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:55:32.421963 | orchestrator |
2025-06-22 11:55:32.421974 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-22 11:55:32.421985 | orchestrator | Sunday 22 June 2025 11:54:28 +0000 (0:00:02.251) 0:00:05.052 ***********
2025-06-22 11:55:32.421996 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:55:32.422006 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:55:32.422072 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:55:32.422085 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:55:32.422095 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.422113 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:55:32.422124 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:55:32.422135 | orchestrator |
2025-06-22 11:55:32.422146 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-22 11:55:32.422157 | orchestrator | Sunday 22 June 2025 11:54:29 +0000 (0:00:01.908) 0:00:06.960 ***********
2025-06-22 11:55:32.422167 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.422178 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:55:32.422189 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:55:32.422199 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:55:32.422210 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:55:32.422221 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:55:32.422231 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:55:32.422242 | orchestrator |
2025-06-22 11:55:32.422253 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-22 11:55:32.422263 | orchestrator | Sunday 22 June 2025 11:54:34 +0000 (0:00:04.379) 0:00:11.340 ***********
2025-06-22 11:55:32.422274 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.422284 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:55:32.422295 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:55:32.422317 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:55:32.422328 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:55:32.422338 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:55:32.422349 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:55:32.422360 | orchestrator |
2025-06-22 11:55:32.422370 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-22 11:55:32.422381 | orchestrator | Sunday 22 June 2025 11:54:37 +0000 (0:00:03.435) 0:00:14.775 ***********
2025-06-22 11:55:32.422392 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.422402 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:55:32.422413 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:55:32.422423 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:55:32.422434 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:55:32.422444 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:55:32.422455 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:55:32.422465 | orchestrator |
2025-06-22 11:55:32.422476 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-22 11:55:32.422487 | orchestrator | Sunday 22 June 2025 11:54:47 +0000 (0:00:09.369) 0:00:24.144 ***********
2025-06-22 11:55:32.422497 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.422508 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:55:32.422549 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:55:32.422607 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:55:32.422620 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:55:32.422632 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:55:32.422642 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:55:32.422653 | orchestrator |
2025-06-22 11:55:32.422663 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-22 11:55:32.422674 | orchestrator | Sunday 22 June 2025 11:55:07 +0000 (0:00:19.975) 0:00:44.119 ***********
2025-06-22 11:55:32.422686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:55:32.422698 | orchestrator |
2025-06-22 11:55:32.422709 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-22 11:55:32.422720 | orchestrator | Sunday 22 June 2025 11:55:08 +0000 (0:00:01.502) 0:00:45.622 ***********
2025-06-22 11:55:32.422731 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-22 11:55:32.422741 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-22 11:55:32.422752 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-22 11:55:32.422763 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-22 11:55:32.422773 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-22 11:55:32.422784 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-22 11:55:32.422794 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-22 11:55:32.422805 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-22 11:55:32.422816 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-22 11:55:32.422826 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-22 11:55:32.422837 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-22 11:55:32.422847 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-22 11:55:32.422858 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-22 11:55:32.422869 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-22 11:55:32.422879 | orchestrator |
2025-06-22 11:55:32.422890 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-22 11:55:32.422901 | orchestrator | Sunday 22 June 2025 11:55:15 +0000 (0:00:06.903) 0:00:52.525 ***********
2025-06-22 11:55:32.422912 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.422922 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:55:32.422941 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:55:32.422952 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:55:32.422963 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:55:32.422973 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:55:32.422984 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:55:32.422994 | orchestrator |
2025-06-22 11:55:32.423005 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-22 11:55:32.423016 | orchestrator | Sunday 22 June 2025 11:55:16 +0000 (0:00:01.099) 0:00:53.625 ***********
2025-06-22 11:55:32.423027 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.423037 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:55:32.423048 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:55:32.423059 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:55:32.423069 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:55:32.423080 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:55:32.423090 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:55:32.423101 | orchestrator |
2025-06-22 11:55:32.423112 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-22 11:55:32.423131 | orchestrator | Sunday 22 June 2025 11:55:18 +0000 (0:00:01.439) 0:00:55.065 ***********
2025-06-22 11:55:32.423142 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:55:32.423153 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:55:32.423163 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.423174 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:55:32.423184 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:55:32.423195 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:55:32.423205 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:55:32.423216 | orchestrator |
2025-06-22 11:55:32.423227 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-22 11:55:32.423238 | orchestrator | Sunday 22 June 2025 11:55:19 +0000 (0:00:01.899) 0:00:56.551 ***********
2025-06-22 11:55:32.423248 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:55:32.423259 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:55:32.423269 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:55:32.423280 | orchestrator | ok: [testbed-manager]
2025-06-22 11:55:32.423290 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:55:32.423301 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:55:32.423311 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:55:32.423322 | orchestrator |
2025-06-22 11:55:32.423333 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-22 11:55:32.423343 | orchestrator | Sunday 22 June 2025 11:55:21 +0000 (0:00:01.899) 0:00:58.451 ***********
2025-06-22 11:55:32.423354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-22 11:55:32.423367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:55:32.423378 | orchestrator |
2025-06-22 11:55:32.423389 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-22 11:55:32.423399 | orchestrator | Sunday 22 June 2025 11:55:23 +0000 (0:00:01.824) 0:01:00.275 ***********
2025-06-22 11:55:32.423410 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.423424 | orchestrator |
2025-06-22 11:55:32.423449 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-22 11:55:32.423460 | orchestrator | Sunday 22 June 2025 11:55:25 +0000 (0:00:02.361) 0:01:02.636 ***********
2025-06-22 11:55:32.423471 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:55:32.423482 | orchestrator | changed: [testbed-manager]
2025-06-22 11:55:32.423492 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:55:32.423503 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:55:32.423513 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:55:32.423524 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:55:32.423535 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:55:32.423545 | orchestrator |
2025-06-22 11:55:32.423586 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:55:32.423598 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423609 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423620 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423631 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423642 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423652 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423663 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:55:32.423673 | orchestrator |
2025-06-22 11:55:32.423684 | orchestrator |
2025-06-22 11:55:32.423695 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:55:32.423706 | orchestrator | Sunday 22 June 2025 11:55:29 +0000 (0:00:03.634) 0:01:06.271 ***********
2025-06-22 11:55:32.423716 | orchestrator | ===============================================================================
2025-06-22 11:55:32.423727 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.98s
2025-06-22 11:55:32.423738 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.37s
2025-06-22 11:55:32.423748 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.90s
2025-06-22 11:55:32.423759 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.38s
2025-06-22 11:55:32.423769 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.63s
2025-06-22 11:55:32.423780 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.44s
2025-06-22 11:55:32.423790 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.36s
2025-06-22 11:55:32.423801 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.25s
2025-06-22 11:55:32.423812 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.19s
2025-06-22 11:55:32.423822 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.91s
2025-06-22 11:55:32.423833 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.90s
2025-06-22 11:55:32.423850 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.82s
2025-06-22 11:55:32.423861 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.50s
2025-06-22 11:55:32.423872 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.49s
2025-06-22 11:55:32.423882 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s
2025-06-22 11:55:32.423893 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.10s
2025-06-22 11:55:32.423904 | orchestrator | 2025-06-22 11:55:32 | INFO  | Task 51f4db6c-5c5b-4c8e-a83a-5e47d266575d is in state SUCCESS
2025-06-22 11:55:32.423915 | orchestrator | 2025-06-22 11:55:32 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:35.466312 | orchestrator | 2025-06-22 11:55:35 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:35.466794 | orchestrator | 2025-06-22 11:55:35 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:35.468506 | orchestrator | 2025-06-22 11:55:35 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:35.470213 | orchestrator | 2025-06-22 11:55:35 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:35.470245 | orchestrator | 2025-06-22 11:55:35 | INFO  | Wait 1 second(s) until the
next check
2025-06-22 11:55:38.508971 | orchestrator | 2025-06-22 11:55:38 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:38.511277 | orchestrator | 2025-06-22 11:55:38 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:38.513088 | orchestrator | 2025-06-22 11:55:38 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:38.514590 | orchestrator | 2025-06-22 11:55:38 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:38.514714 | orchestrator | 2025-06-22 11:55:38 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:41.547714 | orchestrator | 2025-06-22 11:55:41 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:41.551527 | orchestrator | 2025-06-22 11:55:41 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:41.552822 | orchestrator | 2025-06-22 11:55:41 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:41.555043 | orchestrator | 2025-06-22 11:55:41 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:41.555179 | orchestrator | 2025-06-22 11:55:41 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:44.595932 | orchestrator | 2025-06-22 11:55:44 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:44.597472 | orchestrator | 2025-06-22 11:55:44 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:44.599256 | orchestrator | 2025-06-22 11:55:44 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:44.601070 | orchestrator | 2025-06-22 11:55:44 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:44.601206 | orchestrator | 2025-06-22 11:55:44 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:47.641021 | orchestrator | 2025-06-22 11:55:47 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:47.642490 | orchestrator | 2025-06-22 11:55:47 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:47.642522 | orchestrator | 2025-06-22 11:55:47 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:47.644470 | orchestrator | 2025-06-22 11:55:47 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:47.644637 | orchestrator | 2025-06-22 11:55:47 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:50.697503 | orchestrator | 2025-06-22 11:55:50 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:50.698209 | orchestrator | 2025-06-22 11:55:50 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:50.703195 | orchestrator | 2025-06-22 11:55:50 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:50.711970 | orchestrator | 2025-06-22 11:55:50 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:50.712058 | orchestrator | 2025-06-22 11:55:50 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:53.764653 | orchestrator | 2025-06-22 11:55:53 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:53.765481 | orchestrator | 2025-06-22 11:55:53 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state STARTED
2025-06-22 11:55:53.768485 | orchestrator | 2025-06-22 11:55:53 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:53.774967 | orchestrator | 2025-06-22 11:55:53 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:53.774998 | orchestrator | 2025-06-22 11:55:53 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:56.821693 | orchestrator | 2025-06-22 11:55:56 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:56.822537 | orchestrator | 2025-06-22 11:55:56 | INFO  | Task 804af8bb-818b-4069-b30f-0c28cb9e345c is in state SUCCESS
2025-06-22 11:55:56.825159 | orchestrator | 2025-06-22 11:55:56 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:56.827505 | orchestrator | 2025-06-22 11:55:56 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:56.827720 | orchestrator | 2025-06-22 11:55:56 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:55:59.890498 | orchestrator | 2025-06-22 11:55:59 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:55:59.891929 | orchestrator | 2025-06-22 11:55:59 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:55:59.892626 | orchestrator | 2025-06-22 11:55:59 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:55:59.892659 | orchestrator | 2025-06-22 11:55:59 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:02.931528 | orchestrator | 2025-06-22 11:56:02 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:02.933112 | orchestrator | 2025-06-22 11:56:02 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:02.934518 | orchestrator | 2025-06-22 11:56:02 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:02.934577 | orchestrator | 2025-06-22 11:56:02 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:05.972435 | orchestrator | 2025-06-22 11:56:05 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:05.973834 | orchestrator | 2025-06-22 11:56:05 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:05.974114 | orchestrator | 2025-06-22 11:56:05 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:05.974512 | orchestrator | 2025-06-22 11:56:05 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:09.035999 | orchestrator | 2025-06-22 11:56:09 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:09.036110 | orchestrator | 2025-06-22 11:56:09 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:09.038776 | orchestrator | 2025-06-22 11:56:09 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:09.039019 | orchestrator | 2025-06-22 11:56:09 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:12.079809 | orchestrator | 2025-06-22 11:56:12 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:12.082689 | orchestrator | 2025-06-22 11:56:12 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:12.087070 | orchestrator | 2025-06-22 11:56:12 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:12.087651 | orchestrator | 2025-06-22 11:56:12 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:15.135984 | orchestrator | 2025-06-22 11:56:15 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:15.138063 | orchestrator | 2025-06-22 11:56:15 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:15.140993 | orchestrator | 2025-06-22 11:56:15 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:15.141002 | orchestrator | 2025-06-22 11:56:15 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:18.197834 | orchestrator | 2025-06-22 11:56:18 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:18.200352 | orchestrator | 2025-06-22 11:56:18 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:18.201667 | orchestrator | 2025-06-22 11:56:18 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:18.201703 | orchestrator | 2025-06-22 11:56:18 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:21.250298 | orchestrator | 2025-06-22 11:56:21 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:21.253525 | orchestrator | 2025-06-22 11:56:21 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:21.254630 | orchestrator | 2025-06-22 11:56:21 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:21.254751 | orchestrator | 2025-06-22 11:56:21 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:24.313372 | orchestrator | 2025-06-22 11:56:24 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:24.315159 | orchestrator | 2025-06-22 11:56:24 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:24.317713 | orchestrator | 2025-06-22 11:56:24 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:24.317746 | orchestrator | 2025-06-22 11:56:24 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:27.362866 | orchestrator | 2025-06-22 11:56:27 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:27.363800 | orchestrator | 2025-06-22 11:56:27 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:27.366463 | orchestrator | 2025-06-22 11:56:27 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:27.366506 | orchestrator | 2025-06-22 11:56:27 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:30.414245 | orchestrator | 2025-06-22 11:56:30 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:30.416984 | orchestrator | 2025-06-22 11:56:30 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:30.418879 | orchestrator | 2025-06-22 11:56:30 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:30.418912 | orchestrator | 2025-06-22 11:56:30 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:33.469742 | orchestrator | 2025-06-22 11:56:33 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:33.472032 | orchestrator | 2025-06-22 11:56:33 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:33.475360 | orchestrator | 2025-06-22 11:56:33 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:33.475388 | orchestrator | 2025-06-22 11:56:33 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:36.523175 | orchestrator | 2025-06-22 11:56:36 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:36.524994 | orchestrator | 2025-06-22 11:56:36 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:36.526404 | orchestrator | 2025-06-22 11:56:36 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:36.526433 | orchestrator | 2025-06-22 11:56:36 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:39.577059 | orchestrator | 2025-06-22 11:56:39 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:39.579519 | orchestrator | 2025-06-22 11:56:39 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:39.583974 | orchestrator | 2025-06-22 11:56:39 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:39.584988 | orchestrator | 2025-06-22 11:56:39 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:42.638419 | orchestrator | 2025-06-22 11:56:42 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:42.638647 | orchestrator | 2025-06-22 11:56:42 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:42.639333 | orchestrator | 2025-06-22 11:56:42 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:42.639374 | orchestrator | 2025-06-22 11:56:42 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:45.693061 | orchestrator | 2025-06-22 11:56:45 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:45.695975 | orchestrator | 2025-06-22 11:56:45 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:45.699749 | orchestrator | 2025-06-22 11:56:45 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:45.699789 | orchestrator | 2025-06-22 11:56:45 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:48.756670 | orchestrator | 2025-06-22 11:56:48 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:48.756804 | orchestrator | 2025-06-22 11:56:48 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:48.758424 | orchestrator | 2025-06-22 11:56:48 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:48.758644 | orchestrator | 2025-06-22 11:56:48 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:51.829359 | orchestrator | 2025-06-22 11:56:51 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:51.829504 | orchestrator | 2025-06-22 11:56:51 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:51.833563 | orchestrator | 2025-06-22 11:56:51 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:51.833623 | orchestrator | 2025-06-22 11:56:51 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:54.880097 | orchestrator | 2025-06-22 11:56:54 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:54.881004 | orchestrator | 2025-06-22 11:56:54 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:54.885443 | orchestrator | 2025-06-22 11:56:54 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:54.885503 | orchestrator | 2025-06-22 11:56:54 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:56:57.926388 | orchestrator | 2025-06-22 11:56:57 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:56:57.927686 | orchestrator | 2025-06-22 11:56:57 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:56:57.929812 | orchestrator | 2025-06-22 11:56:57 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:56:57.930959 | orchestrator | 2025-06-22 11:56:57 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:57:00.990822 | orchestrator | 2025-06-22 11:57:00 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:57:00.991267 | orchestrator | 2025-06-22 11:57:00 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:57:00.993106 | orchestrator | 2025-06-22 11:57:00 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:57:00.993129 | orchestrator | 2025-06-22 11:57:00 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:57:04.048963 | orchestrator | 2025-06-22 11:57:04 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:57:04.049073 | orchestrator | 2025-06-22 11:57:04 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:57:04.050791 | orchestrator | 2025-06-22 11:57:04 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:57:04.050818 | orchestrator | 2025-06-22 11:57:04 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:57:07.094103 | orchestrator | 2025-06-22 11:57:07 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:57:07.095115 | orchestrator | 2025-06-22 11:57:07 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:57:07.098779 | orchestrator | 2025-06-22 11:57:07 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:57:07.098928 | orchestrator | 2025-06-22 11:57:07 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:57:10.149235 | orchestrator | 2025-06-22 11:57:10 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:57:10.150827 | orchestrator | 2025-06-22 11:57:10 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:57:10.151198 | orchestrator | 2025-06-22 11:57:10 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:57:10.151396 | orchestrator | 2025-06-22 11:57:10 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:57:13.202147 | orchestrator | 2025-06-22 11:57:13 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:57:13.202504 | orchestrator | 2025-06-22 11:57:13 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:57:13.203976 | orchestrator | 2025-06-22 11:57:13 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED
2025-06-22 11:57:13.204155 | orchestrator | 2025-06-22 11:57:13 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:57:16.255216 | orchestrator | 2025-06-22 11:57:16 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:57:16.255825 | orchestrator | 2025-06-22 11:57:16 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:57:16.256702 | orchestrator | 2025-06-22 11:57:16 |
INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:57:16.256732 | orchestrator | 2025-06-22 11:57:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:19.311301 | orchestrator | 2025-06-22 11:57:19 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:19.314803 | orchestrator | 2025-06-22 11:57:19 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:19.318149 | orchestrator | 2025-06-22 11:57:19 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state STARTED 2025-06-22 11:57:19.324223 | orchestrator | 2025-06-22 11:57:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:22.374560 | orchestrator | 2025-06-22 11:57:22 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:22.379318 | orchestrator | 2025-06-22 11:57:22 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:22.383132 | orchestrator | 2025-06-22 11:57:22 | INFO  | Task 56761d45-f1bc-4f6e-a701-305518b905e9 is in state SUCCESS 2025-06-22 11:57:22.385038 | orchestrator | 2025-06-22 11:57:22.385077 | orchestrator | 2025-06-22 11:57:22.385089 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-22 11:57:22.385101 | orchestrator | 2025-06-22 11:57:22.385112 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-22 11:57:22.385123 | orchestrator | Sunday 22 June 2025 11:54:44 +0000 (0:00:00.298) 0:00:00.298 *********** 2025-06-22 11:57:22.385135 | orchestrator | ok: [testbed-manager] 2025-06-22 11:57:22.385147 | orchestrator | 2025-06-22 11:57:22.385158 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-22 11:57:22.385169 | orchestrator | Sunday 22 June 2025 11:54:45 +0000 (0:00:00.987) 0:00:01.285 *********** 2025-06-22 11:57:22.385181 | 
orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-22 11:57:22.385203 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-22 11:57:22.385214 | orchestrator | Sunday 22 June 2025 11:54:45 +0000 (0:00:00.595) 0:00:01.880 ***********
2025-06-22 11:57:22.385225 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.385246 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-22 11:57:22.385257 | orchestrator | Sunday 22 June 2025 11:54:47 +0000 (0:00:01.507) 0:00:03.387 ***********
2025-06-22 11:57:22.385268 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-22 11:57:22.385279 | orchestrator | ok: [testbed-manager]
2025-06-22 11:57:22.385302 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-22 11:57:22.385312 | orchestrator | Sunday 22 June 2025 11:55:48 +0000 (0:01:01.223) 0:01:04.611 ***********
2025-06-22 11:57:22.385323 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.385345 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:57:22.385356 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:57:22.385390 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:57:22.385401 | orchestrator | Sunday 22 June 2025 11:55:53 +0000 (0:00:04.713) 0:01:09.324 ***********
2025-06-22 11:57:22.385431 | orchestrator | ===============================================================================
2025-06-22 11:57:22.385431 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.22s
2025-06-22 11:57:22.385442 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.71s
2025-06-22 11:57:22.385453 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.51s
2025-06-22 11:57:22.385464 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.99s
2025-06-22 11:57:22.385475 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.60s
2025-06-22 11:57:22.385576 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-22 11:57:22.385625 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-22 11:57:22.385638 | orchestrator | Sunday 22 June 2025 11:54:15 +0000 (0:00:00.221) 0:00:00.221 ***********
2025-06-22 11:57:22.385651 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:57:22.385677 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-22 11:57:22.385689 | orchestrator | Sunday 22 June 2025 11:54:16 +0000 (0:00:01.141) 0:00:01.363 ***********
2025-06-22 11:57:22.385702 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-22 11:57:22.385714 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-22 11:57:22.385726 | orchestrator | changed: [testbed-node-1] =>
(item=[{'service_name': 'cron'}, 'cron'])
[... matching cron, fluentd and kolla-toolbox config directories are likewise reported as changed on testbed-manager and testbed-node-0 through testbed-node-5; repeated entries elided ...]
2025-06-22 11:57:22.385991 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-22 11:57:22.386003 | orchestrator | Sunday 22 June 2025 11:54:20 +0000 (0:00:04.671) 0:00:06.034 ***********
2025-06-22 11:57:22.386014 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:57:22.386136 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-22 11:57:22.386147 | orchestrator | Sunday 22 June 2025 11:54:22 +0000 (0:00:01.613) 0:00:07.648 ***********
2025-06-22 11:57:22.386161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.386223 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.386328 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[... identical fluentd, kolla-toolbox and cron items are also reported as changed for the remaining hosts (testbed-manager and testbed-node-0 through testbed-node-5); repeated definitions elided ...]
2025-06-22 11:57:22.386496 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-22 11:57:22.386533 | orchestrator | Sunday 22 June 2025 11:54:28 +0000 (0:00:05.677) 0:00:13.326 ***********
2025-06-22 11:57:22.386547 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', ...})
2025-06-22 11:57:22.386565 | orchestrator |
skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', ...})
2025-06-22 11:57:22.386577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', ...})
2025-06-22 11:57:22.386589 | orchestrator | skipping: [testbed-manager]
[... the same fluentd, kolla-toolbox and cron items are likewise skipped on testbed-node-0 through testbed-node-5; repeated definitions elided ...]
2025-06-22 11:57:22.386635 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:57:22.386740 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:57:22.386751 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:57:22.386812 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:57:22.386858 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:57:22.386903 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:57:22.386914 | orchestrator | 2025-06-22
11:57:22.386925 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-22 11:57:22.386936 | orchestrator | Sunday 22 June 2025 11:54:29 +0000 (0:00:01.477) 0:00:14.803 *********** 2025-06-22 11:57:22.386947 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.386972 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.386984 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.386995 | orchestrator | skipping: 
[testbed-manager] 2025-06-22 11:57:22.387006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.387018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.387058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387104 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:57:22.387115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.387126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387138 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:57:22.387149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.387172 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:57:22.387183 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387212 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:57:22.387227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.387246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387269 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:57:22.387280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 11:57:22.387291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 11:57:22.387314 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:57:22.387325 | orchestrator | 2025-06-22 11:57:22.387335 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-22 11:57:22.387353 | orchestrator | Sunday 22 June 2025 11:54:32 +0000 (0:00:03.039) 0:00:17.843 *********** 2025-06-22 11:57:22.387364 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:57:22.387374 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:57:22.387385 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:57:22.387395 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:57:22.387406 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:57:22.387417 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:57:22.387427 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:57:22.387438 | orchestrator | 2025-06-22 11:57:22.387449 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-22 11:57:22.387460 | orchestrator | Sunday 22 June 2025 11:54:33 +0000 (0:00:00.863) 0:00:18.706 *********** 2025-06-22 11:57:22.387470 | orchestrator | skipping: [testbed-manager] 2025-06-22 11:57:22.387481 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
11:57:22.387492 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:57:22.387502 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:57:22.387529 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:57:22.387540 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:57:22.387551 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:57:22.387561 | orchestrator | 2025-06-22 11:57:22.387572 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-22 11:57:22.387583 | orchestrator | Sunday 22 June 2025 11:54:34 +0000 (0:00:01.271) 0:00:19.977 *********** 2025-06-22 11:57:22.387608 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387655 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 11:57:22.387743 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 11:57:22.387871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.387882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.387893 | orchestrator |
2025-06-22 11:57:22.387904 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-22 11:57:22.387915 | orchestrator | Sunday 22 June 2025 11:54:40 +0000 (0:00:05.394) 0:00:25.372 ***********
2025-06-22 11:57:22.387926 | orchestrator | [WARNING]: Skipped
2025-06-22 11:57:22.387937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-06-22 11:57:22.387948 | orchestrator | to this access issue:
2025-06-22 11:57:22.387959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-06-22 11:57:22.387969 | orchestrator | directory
2025-06-22 11:57:22.387980 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 11:57:22.387991 | orchestrator |
2025-06-22 11:57:22.388001 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-22 11:57:22.388012 | orchestrator | Sunday 22 June 2025 11:54:42 +0000 (0:00:02.143) 0:00:27.515 ***********
2025-06-22 11:57:22.388023 | orchestrator | [WARNING]: Skipped
2025-06-22 11:57:22.388033 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-06-22 11:57:22.388044 | orchestrator | to this access issue:
2025-06-22 11:57:22.388055 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-06-22 11:57:22.388065 | orchestrator | directory
2025-06-22 11:57:22.388076 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 11:57:22.388087 | orchestrator |
2025-06-22 11:57:22.388097 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-22 11:57:22.388108 | orchestrator | Sunday 22 June 2025 11:54:43 +0000 (0:00:01.028) 0:00:28.544 ***********
2025-06-22 11:57:22.388119 | orchestrator | [WARNING]: Skipped
2025-06-22 11:57:22.388130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-06-22 11:57:22.388140 | orchestrator | to this access issue:
2025-06-22 11:57:22.388151 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-06-22 11:57:22.388162 | orchestrator | directory
2025-06-22 11:57:22.388177 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 11:57:22.388188 | orchestrator |
2025-06-22 11:57:22.388204 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-22 11:57:22.388215 | orchestrator | Sunday 22 June 2025 11:54:44 +0000 (0:00:00.802) 0:00:29.346 ***********
2025-06-22 11:57:22.388226 | orchestrator | [WARNING]: Skipped
2025-06-22 11:57:22.388237 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-06-22 11:57:22.388248 | orchestrator | to this access issue:
2025-06-22 11:57:22.388258 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-06-22 11:57:22.388269 | orchestrator | directory
2025-06-22 11:57:22.388280 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 11:57:22.388291 | orchestrator |
2025-06-22 11:57:22.388307 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-22 11:57:22.388318 | orchestrator | Sunday 22 June 2025 11:54:45 +0000 (0:00:00.925) 0:00:30.272 ***********
2025-06-22 11:57:22.388329 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.388340 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.388350 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.388361 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.388371 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.388382 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.388392 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.388403 | orchestrator |
2025-06-22 11:57:22.388414 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-06-22 11:57:22.388425 | orchestrator | Sunday 22 June 2025 11:54:50 +0000 (0:00:05.282) 0:00:35.555 ***********
2025-06-22 11:57:22.388436 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388479 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388489 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388500 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-22 11:57:22.388537 | orchestrator |
2025-06-22 11:57:22.388548 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-06-22 11:57:22.388559 | orchestrator | Sunday 22 June 2025 11:54:53 +0000 (0:00:03.089) 0:00:38.644 ***********
2025-06-22 11:57:22.388569 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.388580 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.388591 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.388602 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.388612 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.388623 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.388633 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.388644 | orchestrator |
2025-06-22 11:57:22.388655 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-06-22 11:57:22.388666 | orchestrator | Sunday 22 June 2025 11:54:56 +0000 (0:00:03.202) 0:00:41.847 ***********
2025-06-22 11:57:22.388677 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.388689 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.388705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388743 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388766 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388777 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.388788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388800 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.388811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388838 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388859 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.388878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388896 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388914 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.388932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388957 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.388983 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389038 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389055 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389073 | orchestrator |
2025-06-22 11:57:22.389091 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-22 11:57:22.389109 | orchestrator | Sunday 22 June 2025 11:54:59 +0000 (0:00:02.387) 0:00:44.234 ***********
2025-06-22 11:57:22.389127 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389145 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389182 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389202 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389220 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389238 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-22 11:57:22.389251 | orchestrator |
2025-06-22 11:57:22.389262 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-22 11:57:22.389273 | orchestrator | Sunday 22 June 2025 11:55:01 +0000 (0:00:02.675) 0:00:46.910 ***********
2025-06-22 11:57:22.389284 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389295 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389325 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389335 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389346 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389357 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-22 11:57:22.389368 | orchestrator |
2025-06-22 11:57:22.389378 | orchestrator | TASK [common : Check common containers] ****************************************
2025-06-22 11:57:22.389389 | orchestrator | Sunday 22 June 2025 11:55:03 +0000 (0:00:01.911) 0:00:48.821 ***********
2025-06-22 11:57:22.389400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389421 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389476 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-22 11:57:22.389577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389599 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 11:57:22.389726 | orchestrator |
2025-06-22 11:57:22.389737 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-06-22 11:57:22.389748 | orchestrator | Sunday 22 June 2025 11:55:06 +0000 (0:00:02.920) 0:00:51.742 ***********
2025-06-22 11:57:22.389759 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.389770 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.389781 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.389791 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.389802 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.389813 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.389824 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.389834 | orchestrator |
2025-06-22 11:57:22.389845 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-06-22 11:57:22.389856 | orchestrator | Sunday 22 June 2025 11:55:08 +0000 (0:00:01.588) 0:00:53.331 ***********
2025-06-22 11:57:22.389867 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.389877 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.389888 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.389899 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.389909 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.389920 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.389930 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.389941 | orchestrator |
2025-06-22 11:57:22.389952 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.389963 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:01.797) 0:00:55.128 ***********
2025-06-22 11:57:22.389974 | orchestrator |
2025-06-22 11:57:22.389984 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.389996 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.233) 0:00:55.361 ***********
2025-06-22 11:57:22.390006 | orchestrator |
2025-06-22 11:57:22.390050 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.390063 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.060) 0:00:55.422 ***********
2025-06-22 11:57:22.390074 | orchestrator |
2025-06-22 11:57:22.390085 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.390096 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.047) 0:00:55.469 ***********
2025-06-22 11:57:22.390106 | orchestrator |
2025-06-22 11:57:22.390117 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.390127 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.057) 0:00:55.527 ***********
2025-06-22 11:57:22.390138 | orchestrator |
2025-06-22 11:57:22.390149 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.390160 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.077) 0:00:55.605 ***********
2025-06-22 11:57:22.390170 | orchestrator |
2025-06-22 11:57:22.390181 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-22 11:57:22.390191 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.079) 0:00:55.684 ***********
2025-06-22 11:57:22.390202 | orchestrator |
2025-06-22 11:57:22.390213 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-06-22 11:57:22.390228 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:00.106) 0:00:55.790 ***********
2025-06-22 11:57:22.390246 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.390257 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.390268 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.390279 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.390289 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.390300 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.390310 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.390327 | orchestrator |
2025-06-22 11:57:22.390338 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-06-22 11:57:22.390349 | orchestrator | Sunday 22 June 2025 11:55:57 +0000 (0:00:46.612) 0:01:42.403 ***********
2025-06-22 11:57:22.390360 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.390371 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.390381 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.390392 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.390403 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.390413 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.390424 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.390435 | orchestrator |
2025-06-22 11:57:22.390445 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-06-22 11:57:22.390456 | orchestrator | Sunday 22 June 2025 11:57:09 +0000 (0:01:11.864) 0:02:54.267 ***********
2025-06-22 11:57:22.390467 | orchestrator | ok: [testbed-manager]
2025-06-22 11:57:22.390478 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:57:22.390489 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:57:22.390499 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:57:22.390558 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:57:22.390570 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:57:22.390581 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:57:22.390592 | orchestrator |
2025-06-22 11:57:22.390603 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-22 11:57:22.390614 | orchestrator | Sunday 22 June 2025 11:57:11 +0000 (0:00:02.180) 0:02:56.448 ***********
2025-06-22 11:57:22.390624 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:22.390635 | orchestrator | changed: [testbed-manager]
2025-06-22 11:57:22.390646 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:22.390656 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:57:22.390667 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:57:22.390677 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:57:22.390687 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:22.390696 | orchestrator |
2025-06-22 11:57:22.390706 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:57:22.390716 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390726 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390736 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390746 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390755 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390765 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390774 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 11:57:22.390784 | orchestrator |
2025-06-22 11:57:22.390793 | orchestrator |
2025-06-22 11:57:22.390803 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:57:22.390812 | orchestrator | Sunday 22 June 2025 11:57:21 +0000 (0:00:09.887) 0:03:06.335 ***********
2025-06-22 11:57:22.390822 | orchestrator | ===============================================================================
2025-06-22 11:57:22.390832 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 71.86s
2025-06-22 11:57:22.390847 | orchestrator | common : Restart fluentd container ------------------------------------- 46.61s
2025-06-22 11:57:22.390857 | orchestrator | common : Restart
cron container ----------------------------------------- 9.89s 2025-06-22 11:57:22.390866 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.68s 2025-06-22 11:57:22.390876 | orchestrator | common : Copying over config.json files for services -------------------- 5.39s 2025-06-22 11:57:22.390885 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.28s 2025-06-22 11:57:22.390895 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.67s 2025-06-22 11:57:22.390905 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.20s 2025-06-22 11:57:22.390914 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.09s 2025-06-22 11:57:22.390924 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.04s 2025-06-22 11:57:22.390933 | orchestrator | common : Check common containers ---------------------------------------- 2.92s 2025-06-22 11:57:22.390942 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.68s 2025-06-22 11:57:22.390952 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.39s 2025-06-22 11:57:22.390966 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.18s 2025-06-22 11:57:22.390981 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.14s 2025-06-22 11:57:22.390991 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.91s 2025-06-22 11:57:22.391000 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.80s 2025-06-22 11:57:22.391010 | orchestrator | common : include_tasks -------------------------------------------------- 1.61s 2025-06-22 11:57:22.391019 | orchestrator | common : Creating log volume 
-------------------------------------------- 1.59s 2025-06-22 11:57:22.391029 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.48s 2025-06-22 11:57:22.391038 | orchestrator | 2025-06-22 11:57:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:25.430289 | orchestrator | 2025-06-22 11:57:25 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:25.430735 | orchestrator | 2025-06-22 11:57:25 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:25.431482 | orchestrator | 2025-06-22 11:57:25 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:25.432699 | orchestrator | 2025-06-22 11:57:25 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:25.439356 | orchestrator | 2025-06-22 11:57:25 | INFO  | Task 5beed22d-f156-4068-9209-98936038ba15 is in state STARTED 2025-06-22 11:57:25.442595 | orchestrator | 2025-06-22 11:57:25 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:25.442640 | orchestrator | 2025-06-22 11:57:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:28.477110 | orchestrator | 2025-06-22 11:57:28 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:28.478163 | orchestrator | 2025-06-22 11:57:28 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:28.478872 | orchestrator | 2025-06-22 11:57:28 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:28.479571 | orchestrator | 2025-06-22 11:57:28 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:28.482174 | orchestrator | 2025-06-22 11:57:28 | INFO  | Task 5beed22d-f156-4068-9209-98936038ba15 is in state STARTED 2025-06-22 11:57:28.482857 | orchestrator | 2025-06-22 11:57:28 | INFO  | Task 
35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:28.483101 | orchestrator | 2025-06-22 11:57:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:31.518368 | orchestrator | 2025-06-22 11:57:31 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:31.518450 | orchestrator | 2025-06-22 11:57:31 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:31.518921 | orchestrator | 2025-06-22 11:57:31 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:31.519712 | orchestrator | 2025-06-22 11:57:31 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:31.520624 | orchestrator | 2025-06-22 11:57:31 | INFO  | Task 5beed22d-f156-4068-9209-98936038ba15 is in state STARTED 2025-06-22 11:57:31.521595 | orchestrator | 2025-06-22 11:57:31 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:31.521756 | orchestrator | 2025-06-22 11:57:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:34.555469 | orchestrator | 2025-06-22 11:57:34 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:34.559025 | orchestrator | 2025-06-22 11:57:34 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:34.559765 | orchestrator | 2025-06-22 11:57:34 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:34.560928 | orchestrator | 2025-06-22 11:57:34 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:34.561932 | orchestrator | 2025-06-22 11:57:34 | INFO  | Task 5beed22d-f156-4068-9209-98936038ba15 is in state STARTED 2025-06-22 11:57:34.563148 | orchestrator | 2025-06-22 11:57:34 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:34.563225 | orchestrator | 2025-06-22 11:57:34 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 11:57:37.607702 | orchestrator | 2025-06-22 11:57:37 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:37.608935 | orchestrator | 2025-06-22 11:57:37 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:37.610110 | orchestrator | 2025-06-22 11:57:37 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:37.611242 | orchestrator | 2025-06-22 11:57:37 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:37.611914 | orchestrator | 2025-06-22 11:57:37 | INFO  | Task 5beed22d-f156-4068-9209-98936038ba15 is in state SUCCESS 2025-06-22 11:57:37.615700 | orchestrator | 2025-06-22 11:57:37 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:37.615741 | orchestrator | 2025-06-22 11:57:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:40.644852 | orchestrator | 2025-06-22 11:57:40 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:40.645202 | orchestrator | 2025-06-22 11:57:40 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:57:40.645809 | orchestrator | 2025-06-22 11:57:40 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:40.646615 | orchestrator | 2025-06-22 11:57:40 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:40.647346 | orchestrator | 2025-06-22 11:57:40 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:40.648048 | orchestrator | 2025-06-22 11:57:40 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:40.649068 | orchestrator | 2025-06-22 11:57:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:43.682646 | orchestrator | 2025-06-22 11:57:43 | INFO  | Task 
db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:43.682845 | orchestrator | 2025-06-22 11:57:43 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:57:43.684575 | orchestrator | 2025-06-22 11:57:43 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:43.686145 | orchestrator | 2025-06-22 11:57:43 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:43.688168 | orchestrator | 2025-06-22 11:57:43 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:43.688888 | orchestrator | 2025-06-22 11:57:43 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:43.688967 | orchestrator | 2025-06-22 11:57:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:46.740578 | orchestrator | 2025-06-22 11:57:46 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:46.740682 | orchestrator | 2025-06-22 11:57:46 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:57:46.741022 | orchestrator | 2025-06-22 11:57:46 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:46.743894 | orchestrator | 2025-06-22 11:57:46 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:46.744381 | orchestrator | 2025-06-22 11:57:46 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:46.744938 | orchestrator | 2025-06-22 11:57:46 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:46.744973 | orchestrator | 2025-06-22 11:57:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:49.797760 | orchestrator | 2025-06-22 11:57:49 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:49.801672 | orchestrator | 2025-06-22 11:57:49 | INFO  | Task 
aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:57:49.803033 | orchestrator | 2025-06-22 11:57:49 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:49.804684 | orchestrator | 2025-06-22 11:57:49 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:49.806070 | orchestrator | 2025-06-22 11:57:49 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:49.808999 | orchestrator | 2025-06-22 11:57:49 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:49.809041 | orchestrator | 2025-06-22 11:57:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:52.848224 | orchestrator | 2025-06-22 11:57:52 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:52.853430 | orchestrator | 2025-06-22 11:57:52 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:57:52.862434 | orchestrator | 2025-06-22 11:57:52 | INFO  | Task 91986091-0658-4797-9b87-ff6e932303e5 is in state STARTED 2025-06-22 11:57:52.871564 | orchestrator | 2025-06-22 11:57:52 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:52.883055 | orchestrator | 2025-06-22 11:57:52 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:52.883127 | orchestrator | 2025-06-22 11:57:52 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:52.883170 | orchestrator | 2025-06-22 11:57:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:55.918979 | orchestrator | 2025-06-22 11:57:55 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:55.919553 | orchestrator | 2025-06-22 11:57:55 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:57:55.920439 | orchestrator | 2025-06-22 11:57:55 | INFO  | Task 
91986091-0658-4797-9b87-ff6e932303e5 is in state SUCCESS
2025-06-22 11:57:55.922572 | orchestrator |
2025-06-22 11:57:55.922612 | orchestrator |
2025-06-22 11:57:55.922625 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 11:57:55.922637 | orchestrator |
2025-06-22 11:57:55.922648 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 11:57:55.922659 | orchestrator | Sunday 22 June 2025 11:57:26 +0000 (0:00:00.580) 0:00:00.580 ***********
2025-06-22 11:57:55.922670 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:57:55.922682 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:57:55.922693 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:57:55.922704 | orchestrator |
2025-06-22 11:57:55.922715 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 11:57:55.922726 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.556) 0:00:01.137 ***********
2025-06-22 11:57:55.922737 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-22 11:57:55.922748 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-22 11:57:55.922758 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-22 11:57:55.922769 | orchestrator |
2025-06-22 11:57:55.922780 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-22 11:57:55.922790 | orchestrator |
2025-06-22 11:57:55.922801 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-22 11:57:55.922812 | orchestrator | Sunday 22 June 2025 11:57:28 +0000 (0:00:00.682) 0:00:01.819 ***********
2025-06-22 11:57:55.922823 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:57:55.922834 | orchestrator |
2025-06-22 11:57:55.922845 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-22 11:57:55.922855 | orchestrator | Sunday 22 June 2025 11:57:28 +0000 (0:00:00.746) 0:00:02.566 ***********
2025-06-22 11:57:55.922866 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-22 11:57:55.922877 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-22 11:57:55.922888 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-22 11:57:55.922898 | orchestrator |
2025-06-22 11:57:55.922909 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-22 11:57:55.922919 | orchestrator | Sunday 22 June 2025 11:57:29 +0000 (0:00:00.776) 0:00:03.343 ***********
2025-06-22 11:57:55.922930 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-22 11:57:55.922942 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-22 11:57:55.922952 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-22 11:57:55.922963 | orchestrator |
2025-06-22 11:57:55.922974 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-22 11:57:55.922984 | orchestrator | Sunday 22 June 2025 11:57:31 +0000 (0:00:01.918) 0:00:05.262 ***********
2025-06-22 11:57:55.922995 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:55.923006 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:55.923051 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:55.923063 | orchestrator |
2025-06-22 11:57:55.923074 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-22 11:57:55.923085 | orchestrator | Sunday 22 June 2025 11:57:33 +0000 (0:00:01.769) 0:00:07.031 ***********
2025-06-22 11:57:55.923095 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:57:55.923106 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:57:55.923117 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:57:55.923152 | orchestrator |
2025-06-22 11:57:55.923164 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:57:55.923175 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:57:55.923187 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:57:55.923198 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:57:55.923209 | orchestrator |
2025-06-22 11:57:55.923220 | orchestrator |
2025-06-22 11:57:55.923230 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:57:55.923241 | orchestrator | Sunday 22 June 2025 11:57:35 +0000 (0:00:02.457) 0:00:09.492 ***********
2025-06-22 11:57:55.923252 | orchestrator | ===============================================================================
2025-06-22 11:57:55.923262 | orchestrator | memcached : Restart memcached container --------------------------------- 2.47s
2025-06-22 11:57:55.923273 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.92s
2025-06-22 11:57:55.923284 | orchestrator | memcached : Check memcached container ----------------------------------- 1.77s
2025-06-22 11:57:55.923295 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.78s
2025-06-22 11:57:55.923306 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.75s
2025-06-22 11:57:55.923316 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2025-06-22 11:57:55.923327 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s
2025-06-22 11:57:55.923338 | orchestrator |
2025-06-22 11:57:55.923348 | orchestrator |
2025-06-22 11:57:55.923359 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 11:57:55.923369 | orchestrator |
2025-06-22 11:57:55.923380 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 11:57:55.923391 | orchestrator | Sunday 22 June 2025 11:57:26 +0000 (0:00:00.198) 0:00:00.198 ***********
2025-06-22 11:57:55.923401 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:57:55.923412 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:57:55.923423 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:57:55.923433 | orchestrator |
2025-06-22 11:57:55.923444 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 11:57:55.923466 | orchestrator | Sunday 22 June 2025 11:57:26 +0000 (0:00:00.251) 0:00:00.450 ***********
2025-06-22 11:57:55.923478 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-22 11:57:55.923488 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-22 11:57:55.923579 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-22 11:57:55.923593 | orchestrator |
2025-06-22 11:57:55.923604 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-22 11:57:55.923615 | orchestrator |
2025-06-22 11:57:55.923625 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-22 11:57:55.923636 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.703) 0:00:01.154 ***********
2025-06-22 11:57:55.923647 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:57:55.923658 | orchestrator |
2025-06-22 11:57:55.923669 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-22 11:57:55.923680 |
orchestrator | Sunday 22 June 2025 11:57:28 +0000 (0:00:00.777) 0:00:01.931 *********** 2025-06-22 11:57:55.923693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923813 | orchestrator | 2025-06-22 11:57:55.923824 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-22 11:57:55.923835 | orchestrator | Sunday 22 June 2025 11:57:29 +0000 (0:00:01.343) 0:00:03.275 *********** 2025-06-22 11:57:55.923847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923946 | orchestrator | 2025-06-22 11:57:55.923957 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-22 11:57:55.923968 | orchestrator | Sunday 22 June 2025 11:57:32 +0000 (0:00:02.865) 0:00:06.140 *********** 2025-06-22 11:57:55.923987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.923998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924009 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-06-22 11:57:55.924057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924068 | orchestrator | 2025-06-22 11:57:55.924079 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-22 11:57:55.924097 | orchestrator | Sunday 22 June 2025 11:57:35 +0000 (0:00:02.996) 0:00:09.137 *********** 2025-06-22 11:57:55.924109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 11:57:55.924198 | orchestrator | 2025-06-22 11:57:55.924210 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 11:57:55.924221 | orchestrator | Sunday 22 June 2025 11:57:37 +0000 (0:00:02.043) 0:00:11.180 *********** 2025-06-22 11:57:55.924232 | orchestrator | 2025-06-22 11:57:55.924243 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 11:57:55.924254 | orchestrator | Sunday 22 June 2025 11:57:37 +0000 (0:00:00.066) 0:00:11.247 *********** 2025-06-22 11:57:55.924265 | orchestrator | 2025-06-22 11:57:55.924276 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 11:57:55.924287 | orchestrator | Sunday 22 June 2025 11:57:37 +0000 (0:00:00.068) 0:00:11.315 *********** 2025-06-22 11:57:55.924297 | orchestrator | 
2025-06-22 11:57:55.924308 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-22 11:57:55.924319 | orchestrator | Sunday 22 June 2025 11:57:37 +0000 (0:00:00.108) 0:00:11.423 *********** 2025-06-22 11:57:55.924330 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:57:55.924340 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:57:55.924351 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:57:55.924362 | orchestrator | 2025-06-22 11:57:55.924373 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-22 11:57:55.924384 | orchestrator | Sunday 22 June 2025 11:57:47 +0000 (0:00:10.100) 0:00:21.524 *********** 2025-06-22 11:57:55.924395 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:57:55.924405 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:57:55.924416 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:57:55.924427 | orchestrator | 2025-06-22 11:57:55.924438 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 11:57:55.924449 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:57:55.924460 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:57:55.924471 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 11:57:55.924482 | orchestrator | 2025-06-22 11:57:55.924516 | orchestrator | 2025-06-22 11:57:55.924529 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:57:55.924540 | orchestrator | Sunday 22 June 2025 11:57:52 +0000 (0:00:04.588) 0:00:26.112 *********** 2025-06-22 11:57:55.924550 | orchestrator | =============================================================================== 2025-06-22 11:57:55.924561 | 
orchestrator | redis : Restart redis container ---------------------------------------- 10.10s 2025-06-22 11:57:55.924572 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.59s 2025-06-22 11:57:55.924582 | orchestrator | redis : Copying over redis config files --------------------------------- 3.00s 2025-06-22 11:57:55.924593 | orchestrator | redis : Copying over default config.json files -------------------------- 2.87s 2025-06-22 11:57:55.924604 | orchestrator | redis : Check redis containers ------------------------------------------ 2.04s 2025-06-22 11:57:55.924615 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.34s 2025-06-22 11:57:55.924625 | orchestrator | redis : include_tasks --------------------------------------------------- 0.78s 2025-06-22 11:57:55.924636 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-06-22 11:57:55.924647 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-06-22 11:57:55.924657 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2025-06-22 11:57:55.924751 | orchestrator | 2025-06-22 11:57:55 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:55.924765 | orchestrator | 2025-06-22 11:57:55 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:55.924792 | orchestrator | 2025-06-22 11:57:55 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:55.924809 | orchestrator | 2025-06-22 11:57:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:57:58.966953 | orchestrator | 2025-06-22 11:57:58 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:57:58.967316 | orchestrator | 2025-06-22 11:57:58 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state 
STARTED 2025-06-22 11:57:58.968847 | orchestrator | 2025-06-22 11:57:58 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state STARTED 2025-06-22 11:57:58.970628 | orchestrator | 2025-06-22 11:57:58 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED 2025-06-22 11:57:58.971459 | orchestrator | 2025-06-22 11:57:58 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:57:58.971489 | orchestrator | 2025-06-22 11:57:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:58:32.535660 | orchestrator | 2025-06-22 11:58:32 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:58:32.537799 | orchestrator | 2025-06-22 11:58:32 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:58:32.540648 | orchestrator | 2025-06-22 11:58:32 | INFO  | Task 8cfff133-8f12-46d6-bb3e-5e7842fcea09 is in state SUCCESS 2025-06-22 11:58:32.543330 | orchestrator | 2025-06-22 11:58:32.543376 | orchestrator | 2025-06-22 11:58:32.543389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 11:58:32.543401 | orchestrator | 2025-06-22 11:58:32.543420 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 11:58:32.543433 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.325) 0:00:00.325 *********** 2025-06-22 11:58:32.543444 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:58:32.543456 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:58:32.543466 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:58:32.543540 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:58:32.543554 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:58:32.543564 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:58:32.543575 | orchestrator | 2025-06-22 11:58:32.543586 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 11:58:32.543598 | orchestrator | Sunday 22 June 2025
11:57:28 +0000 (0:00:01.118) 0:00:01.444 *********** 2025-06-22 11:58:32.543609 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 11:58:32.543620 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 11:58:32.543631 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 11:58:32.543642 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 11:58:32.543653 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 11:58:32.543663 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 11:58:32.543674 | orchestrator | 2025-06-22 11:58:32.543685 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-22 11:58:32.543695 | orchestrator | 2025-06-22 11:58:32.543706 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-22 11:58:32.543737 | orchestrator | Sunday 22 June 2025 11:57:29 +0000 (0:00:00.782) 0:00:02.226 *********** 2025-06-22 11:58:32.543749 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 11:58:32.543761 | orchestrator | 2025-06-22 11:58:32.543772 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 11:58:32.543782 | orchestrator | Sunday 22 June 2025 11:57:30 +0000 (0:00:01.367) 0:00:03.594 *********** 2025-06-22 11:58:32.543793 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 11:58:32.543804 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 11:58:32.543815 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 
2025-06-22 11:58:32.543825 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 11:58:32.543836 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 11:58:32.543847 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 11:58:32.543857 | orchestrator | 2025-06-22 11:58:32.543868 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 11:58:32.543878 | orchestrator | Sunday 22 June 2025 11:57:32 +0000 (0:00:01.221) 0:00:04.815 *********** 2025-06-22 11:58:32.543889 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 11:58:32.543899 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 11:58:32.543910 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 11:58:32.543920 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 11:58:32.543931 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 11:58:32.543941 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 11:58:32.543952 | orchestrator | 2025-06-22 11:58:32.543962 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 11:58:32.543973 | orchestrator | Sunday 22 June 2025 11:57:33 +0000 (0:00:01.577) 0:00:06.393 *********** 2025-06-22 11:58:32.543983 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-22 11:58:32.543994 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:58:32.544006 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-22 11:58:32.544017 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:58:32.544028 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-22 11:58:32.544038 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:58:32.544049 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-22 
11:58:32.544059 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:58:32.544070 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-22 11:58:32.544080 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:58:32.544091 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-22 11:58:32.544101 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:58:32.544112 | orchestrator | 2025-06-22 11:58:32.544122 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-22 11:58:32.544133 | orchestrator | Sunday 22 June 2025 11:57:35 +0000 (0:00:01.372) 0:00:07.766 *********** 2025-06-22 11:58:32.544143 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:58:32.544154 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:58:32.544164 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:58:32.544175 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:58:32.544185 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:58:32.544196 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:58:32.544206 | orchestrator | 2025-06-22 11:58:32.544217 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-22 11:58:32.544228 | orchestrator | Sunday 22 June 2025 11:57:36 +0000 (0:00:01.337) 0:00:09.103 *********** 2025-06-22 11:58:32.544264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544363 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544448 | orchestrator | 2025-06-22 11:58:32.544459 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-22 11:58:32.544471 | orchestrator | Sunday 22 June 2025 11:57:38 +0000 (0:00:02.060) 0:00:11.163 *********** 2025-06-22 11:58:32.544501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544675 | orchestrator | 2025-06-22 11:58:32.544686 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-22 
11:58:32.544697 | orchestrator | Sunday 22 June 2025 11:57:41 +0000 (0:00:03.453) 0:00:14.617 *********** 2025-06-22 11:58:32.544708 | orchestrator | skipping: [testbed-node-0] 2025-06-22 11:58:32.544719 | orchestrator | skipping: [testbed-node-1] 2025-06-22 11:58:32.544730 | orchestrator | skipping: [testbed-node-2] 2025-06-22 11:58:32.544741 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:58:32.544751 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:58:32.544762 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:58:32.544772 | orchestrator | 2025-06-22 11:58:32.544783 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-22 11:58:32.544794 | orchestrator | Sunday 22 June 2025 11:57:42 +0000 (0:00:00.725) 0:00:15.343 *********** 2025-06-22 11:58:32.544805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544867 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 11:58:32.544980 | orchestrator | 2025-06-22 11:58:32.544991 | orchestrator 
| TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 11:58:32.545002 | orchestrator | Sunday 22 June 2025 11:57:45 +0000 (0:00:02.731) 0:00:18.074 *********** 2025-06-22 11:58:32.545012 | orchestrator | 2025-06-22 11:58:32.545023 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 11:58:32.545034 | orchestrator | Sunday 22 June 2025 11:57:45 +0000 (0:00:00.143) 0:00:18.218 *********** 2025-06-22 11:58:32.545045 | orchestrator | 2025-06-22 11:58:32.545055 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 11:58:32.545066 | orchestrator | Sunday 22 June 2025 11:57:45 +0000 (0:00:00.147) 0:00:18.366 *********** 2025-06-22 11:58:32.545082 | orchestrator | 2025-06-22 11:58:32.545093 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 11:58:32.545103 | orchestrator | Sunday 22 June 2025 11:57:45 +0000 (0:00:00.129) 0:00:18.495 *********** 2025-06-22 11:58:32.545114 | orchestrator | 2025-06-22 11:58:32.545125 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 11:58:32.545136 | orchestrator | Sunday 22 June 2025 11:57:45 +0000 (0:00:00.143) 0:00:18.638 *********** 2025-06-22 11:58:32.545146 | orchestrator | 2025-06-22 11:58:32.545157 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 11:58:32.545167 | orchestrator | Sunday 22 June 2025 11:57:46 +0000 (0:00:00.150) 0:00:18.789 *********** 2025-06-22 11:58:32.545178 | orchestrator | 2025-06-22 11:58:32.545189 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-22 11:58:32.545199 | orchestrator | Sunday 22 June 2025 11:57:46 +0000 (0:00:00.457) 0:00:19.246 *********** 2025-06-22 11:58:32.545210 | orchestrator | changed: [testbed-node-0] 2025-06-22 
11:58:32.545221 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:58:32.545231 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:58:32.545242 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:58:32.545252 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:58:32.545263 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:58:32.545274 | orchestrator | 2025-06-22 11:58:32.545284 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-22 11:58:32.545295 | orchestrator | Sunday 22 June 2025 11:57:58 +0000 (0:00:11.770) 0:00:31.017 *********** 2025-06-22 11:58:32.545306 | orchestrator | ok: [testbed-node-1] 2025-06-22 11:58:32.545316 | orchestrator | ok: [testbed-node-0] 2025-06-22 11:58:32.545327 | orchestrator | ok: [testbed-node-2] 2025-06-22 11:58:32.545337 | orchestrator | ok: [testbed-node-3] 2025-06-22 11:58:32.545348 | orchestrator | ok: [testbed-node-4] 2025-06-22 11:58:32.545358 | orchestrator | ok: [testbed-node-5] 2025-06-22 11:58:32.545369 | orchestrator | 2025-06-22 11:58:32.545380 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-22 11:58:32.545390 | orchestrator | Sunday 22 June 2025 11:58:00 +0000 (0:00:01.963) 0:00:32.981 *********** 2025-06-22 11:58:32.545401 | orchestrator | changed: [testbed-node-5] 2025-06-22 11:58:32.545412 | orchestrator | changed: [testbed-node-2] 2025-06-22 11:58:32.545422 | orchestrator | changed: [testbed-node-4] 2025-06-22 11:58:32.545433 | orchestrator | changed: [testbed-node-3] 2025-06-22 11:58:32.545443 | orchestrator | changed: [testbed-node-0] 2025-06-22 11:58:32.545454 | orchestrator | changed: [testbed-node-1] 2025-06-22 11:58:32.545465 | orchestrator | 2025-06-22 11:58:32.545510 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-22 11:58:32.545562 | orchestrator | Sunday 22 June 2025 11:58:08 +0000 (0:00:08.191) 0:00:41.173 
*********** 2025-06-22 11:58:32.545582 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-22 11:58:32.545598 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-22 11:58:32.545610 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-22 11:58:32.545620 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-22 11:58:32.545631 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-22 11:58:32.545642 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-22 11:58:32.545652 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-22 11:58:32.545663 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-22 11:58:32.545680 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-22 11:58:32.545690 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-22 11:58:32.545701 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-22 11:58:32.545711 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-22 11:58:32.545722 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 
11:58:32.545732 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 11:58:32.545743 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 11:58:32.545753 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 11:58:32.545764 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 11:58:32.545774 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 11:58:32.545785 | orchestrator | 2025-06-22 11:58:32.545795 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-22 11:58:32.545806 | orchestrator | Sunday 22 June 2025 11:58:16 +0000 (0:00:07.868) 0:00:49.041 *********** 2025-06-22 11:58:32.545817 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-22 11:58:32.545827 | orchestrator | skipping: [testbed-node-3] 2025-06-22 11:58:32.545838 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-22 11:58:32.545848 | orchestrator | skipping: [testbed-node-4] 2025-06-22 11:58:32.545859 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-22 11:58:32.545869 | orchestrator | skipping: [testbed-node-5] 2025-06-22 11:58:32.545879 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-22 11:58:32.545890 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-22 11:58:32.545900 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-22 11:58:32.545911 | orchestrator | 2025-06-22 11:58:32.545921 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-22 11:58:32.545932 | orchestrator | 
Sunday 22 June 2025 11:58:18 +0000 (0:00:02.511) 0:00:51.552 ***********
2025-06-22 11:58:32.545943 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-06-22 11:58:32.545953 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:32.545964 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-06-22 11:58:32.545974 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:32.545984 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-06-22 11:58:32.545995 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:32.546005 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-06-22 11:58:32.546087 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-06-22 11:58:32.546102 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-06-22 11:58:32.546113 | orchestrator |
2025-06-22 11:58:32.546123 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-22 11:58:32.546134 | orchestrator | Sunday 22 June 2025 11:58:22 +0000 (0:00:03.861) 0:00:55.413 ***********
2025-06-22 11:58:32.546144 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:32.546155 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:32.546166 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:32.546176 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:32.546187 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:32.546208 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:32.546219 | orchestrator |
2025-06-22 11:58:32.546230 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:58:32.546241 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 11:58:32.546261 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 11:58:32.546277 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 11:58:32.546289 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:58:32.546299 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:58:32.546310 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 11:58:32.546321 | orchestrator |
2025-06-22 11:58:32.546332 | orchestrator |
2025-06-22 11:58:32.546343 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 11:58:32.546354 | orchestrator | Sunday 22 June 2025 11:58:31 +0000 (0:00:08.899) 0:01:04.313 ***********
2025-06-22 11:58:32.546364 | orchestrator | ===============================================================================
2025-06-22 11:58:32.546375 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.09s
2025-06-22 11:58:32.546386 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.77s
2025-06-22 11:58:32.546396 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.87s
2025-06-22 11:58:32.546407 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.86s
2025-06-22 11:58:32.546418 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.45s
2025-06-22 11:58:32.546428 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.73s
2025-06-22 11:58:32.546439 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.51s
2025-06-22 11:58:32.546450 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.06s
2025-06-22 11:58:32.546460 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.96s
2025-06-22 11:58:32.546471 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.58s
2025-06-22 11:58:32.546501 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.37s
2025-06-22 11:58:32.546512 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.37s
2025-06-22 11:58:32.546522 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.34s
2025-06-22 11:58:32.546533 | orchestrator | module-load : Load modules ---------------------------------------------- 1.22s
2025-06-22 11:58:32.546543 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.17s
2025-06-22 11:58:32.546554 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.12s
2025-06-22 11:58:32.546564 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s
2025-06-22 11:58:32.546575 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.73s
2025-06-22 11:58:32.546585 | orchestrator | 2025-06-22 11:58:32 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:32.546601 | orchestrator | 2025-06-22 11:58:32 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:32.546612 | orchestrator | 2025-06-22 11:58:32 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:35.591732 | orchestrator | 2025-06-22 11:58:35 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:35.592408 | orchestrator | 2025-06-22 11:58:35 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:35.593218 | orchestrator | 2025-06-22 11:58:35 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:35.596259 | orchestrator | 2025-06-22 11:58:35 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:35.598313 | orchestrator | 2025-06-22 11:58:35 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:35.599163 | orchestrator | 2025-06-22 11:58:35 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:38.633105 | orchestrator | 2025-06-22 11:58:38 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:38.633614 | orchestrator | 2025-06-22 11:58:38 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:38.634238 | orchestrator | 2025-06-22 11:58:38 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:38.635044 | orchestrator | 2025-06-22 11:58:38 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:38.641096 | orchestrator | 2025-06-22 11:58:38 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:38.641337 | orchestrator | 2025-06-22 11:58:38 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:41.673882 | orchestrator | 2025-06-22 11:58:41 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:41.674290 | orchestrator | 2025-06-22 11:58:41 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:41.676856 | orchestrator | 2025-06-22 11:58:41 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:41.678581 | orchestrator | 2025-06-22 11:58:41 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:41.679466 | orchestrator | 2025-06-22 11:58:41 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:41.679503 | orchestrator | 2025-06-22 11:58:41 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:44.723930 | orchestrator | 2025-06-22 11:58:44 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:44.724018 | orchestrator | 2025-06-22 11:58:44 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:44.725231 | orchestrator | 2025-06-22 11:58:44 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:44.725987 | orchestrator | 2025-06-22 11:58:44 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:44.726970 | orchestrator | 2025-06-22 11:58:44 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:44.726999 | orchestrator | 2025-06-22 11:58:44 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:47.767315 | orchestrator | 2025-06-22 11:58:47 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:47.767632 | orchestrator | 2025-06-22 11:58:47 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:47.770545 | orchestrator | 2025-06-22 11:58:47 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:47.771134 | orchestrator | 2025-06-22 11:58:47 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:47.774881 | orchestrator | 2025-06-22 11:58:47 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:47.774954 | orchestrator | 2025-06-22 11:58:47 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:50.803943 | orchestrator | 2025-06-22 11:58:50 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:50.805460 | orchestrator | 2025-06-22 11:58:50 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:50.806308 | orchestrator | 2025-06-22 11:58:50 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:50.807591 | orchestrator | 2025-06-22 11:58:50 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state STARTED
2025-06-22 11:58:50.808533 | orchestrator | 2025-06-22 11:58:50 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 11:58:50.808609 | orchestrator | 2025-06-22 11:58:50 | INFO  | Wait 1 second(s) until the next check
2025-06-22 11:58:53.846445 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 11:58:53.848567 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED
2025-06-22 11:58:53.849575 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 11:58:53.854712 | orchestrator |
2025-06-22 11:58:53.854752 | orchestrator |
2025-06-22 11:58:53.854764 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-06-22 11:58:53.854775 | orchestrator |
2025-06-22 11:58:53.854787 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-06-22 11:58:53.854798 | orchestrator | Sunday 22 June 2025 11:54:16 +0000 (0:00:00.192) 0:00:00.192 ***********
2025-06-22 11:58:53.854809 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:58:53.854821 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:58:53.854833 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:58:53.854843 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.854854 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.854864 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.854874 | orchestrator |
2025-06-22 11:58:53.854885 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-06-22 11:58:53.854896 | orchestrator | Sunday 22 June 2025 11:54:16 +0000 (0:00:00.721) 0:00:00.914 ***********
2025-06-22 11:58:53.854906 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.854917 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.854928 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.854938 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.854949 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.854959 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.854970 | orchestrator |
2025-06-22 11:58:53.854980 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-06-22 11:58:53.854991 | orchestrator | Sunday 22 June 2025 11:54:17 +0000 (0:00:00.772) 0:00:01.686 ***********
2025-06-22 11:58:53.855009 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.855019 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.855030 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.855040 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.855051 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.855061 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.855072 | orchestrator |
2025-06-22 11:58:53.855082 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-06-22 11:58:53.855093 | orchestrator | Sunday 22 June 2025 11:54:18 +0000 (0:00:00.785) 0:00:02.472 ***********
2025-06-22 11:58:53.855103 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:53.855114 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:53.855141 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:53.855151 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.855162 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.855173 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.855183 | orchestrator |
2025-06-22 11:58:53.855194 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-06-22 11:58:53.855204 | orchestrator | Sunday 22 June 2025 11:54:20 +0000 (0:00:02.031) 0:00:04.503 ***********
2025-06-22 11:58:53.855215 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:53.855225 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:53.855236 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:53.855246 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.855257 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.855267 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.855277 | orchestrator |
2025-06-22 11:58:53.855288 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-06-22 11:58:53.855299 | orchestrator | Sunday 22 June 2025 11:54:21 +0000 (0:00:01.066) 0:00:05.569 ***********
2025-06-22 11:58:53.855311 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:53.855323 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:53.855335 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:53.855347 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.855359 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.855371 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.855382 | orchestrator |
2025-06-22 11:58:53.855395 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-06-22 11:58:53.855407 | orchestrator | Sunday 22 June 2025 11:54:22 +0000 (0:00:01.191) 0:00:06.761 ***********
2025-06-22 11:58:53.855419 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.855431 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.855442 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.855454 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.855466 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.855516 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.855528 | orchestrator |
2025-06-22 11:58:53.855540 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-06-22 11:58:53.855553 | orchestrator | Sunday 22 June 2025 11:54:23 +0000 (0:00:00.860) 0:00:07.621 ***********
2025-06-22 11:58:53.855565 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.855576 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.855588 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.855600 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.855612 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.855624 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.855636 | orchestrator |
2025-06-22 11:58:53.855648 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-06-22 11:58:53.855660 | orchestrator | Sunday 22 June 2025 11:54:24 +0000 (0:00:00.821) 0:00:08.442 ***********
2025-06-22 11:58:53.855672 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 11:58:53.855683 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 11:58:53.855693 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.855704 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 11:58:53.855715 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 11:58:53.855726 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.855737 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 11:58:53.855747 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 11:58:53.855758 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.855769 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 11:58:53.855799 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 11:58:53.855811 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.855822 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 11:58:53.855833 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 11:58:53.855844 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.855854 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 11:58:53.855865 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 11:58:53.855876 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.855887 | orchestrator |
2025-06-22 11:58:53.855897 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-06-22 11:58:53.855908 | orchestrator | Sunday 22 June 2025 11:54:25 +0000 (0:00:01.054) 0:00:09.497 ***********
2025-06-22 11:58:53.855919 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.855929 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.855940 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.855951 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.855961 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.855972 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.855983 | orchestrator |
2025-06-22 11:58:53.855994 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-06-22 11:58:53.856009 | orchestrator | Sunday 22 June 2025 11:54:26 +0000 (0:00:01.348) 0:00:10.845 ***********
2025-06-22 11:58:53.856020 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:58:53.856031 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:58:53.856042 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:58:53.856052 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.856063 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.856077 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.856096 | orchestrator |
2025-06-22 11:58:53.856114 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-06-22 11:58:53.856133 | orchestrator | Sunday 22 June 2025 11:54:27 +0000 (0:00:00.875) 0:00:11.720 ***********
2025-06-22 11:58:53.856149 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:53.856164 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:53.856181 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:53.856198 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.856217 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.856280 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.856294 | orchestrator |
2025-06-22 11:58:53.856305 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-06-22 11:58:53.856316 | orchestrator | Sunday 22 June 2025 11:54:33 +0000 (0:00:05.976) 0:00:17.696 ***********
2025-06-22 11:58:53.856326 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.856337 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.856347 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.856358 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.856369 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.856379 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.856390 | orchestrator |
2025-06-22 11:58:53.856401 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-06-22 11:58:53.856411 | orchestrator | Sunday 22 June 2025 11:54:34 +0000 (0:00:01.078) 0:00:18.775 ***********
2025-06-22 11:58:53.856422 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.856432 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.856443 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.856453 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.856464 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.856492 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.856503 | orchestrator |
2025-06-22 11:58:53.856514 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-06-22 11:58:53.856535 | orchestrator | Sunday 22 June 2025 11:54:36 +0000 (0:00:01.644) 0:00:20.419 ***********
2025-06-22 11:58:53.856546 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.856556 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.856567 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.856578 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.856589 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.856599 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.856610 | orchestrator |
2025-06-22 11:58:53.856621 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-06-22 11:58:53.856631 | orchestrator | Sunday 22 June 2025 11:54:37 +0000 (0:00:01.033) 0:00:21.453 ***********
2025-06-22 11:58:53.856642 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-06-22 11:58:53.856653 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-06-22 11:58:53.856664 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.856675 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-06-22 11:58:53.856685 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-06-22 11:58:53.856696 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.856707 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-06-22 11:58:53.856718 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-06-22 11:58:53.856728 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.856739 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-06-22 11:58:53.856750 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-06-22 11:58:53.856761 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.856771 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-06-22 11:58:53.856782 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-06-22 11:58:53.856793 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.856804 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-06-22 11:58:53.856814 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-06-22 11:58:53.856825 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.856835 | orchestrator |
2025-06-22 11:58:53.856846 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-06-22 11:58:53.856865 | orchestrator | Sunday 22 June 2025 11:54:38 +0000 (0:00:01.334) 0:00:22.788 ***********
2025-06-22 11:58:53.856877 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.856888 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.856898 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.856909 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.856919 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.856930 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.856940 | orchestrator |
2025-06-22 11:58:53.856951 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-06-22 11:58:53.856961 | orchestrator |
2025-06-22 11:58:53.856972 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-06-22 11:58:53.856983 | orchestrator | Sunday 22 June 2025 11:54:39 +0000 (0:00:01.647) 0:00:24.131 ***********
2025-06-22 11:58:53.856993 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.857004 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.857014 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.857025 | orchestrator |
2025-06-22 11:58:53.857061 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-06-22 11:58:53.857074 | orchestrator | Sunday 22 June 2025 11:54:41 +0000 (0:00:01.368) 0:00:25.778 ***********
2025-06-22 11:58:53.857085 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.857095 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.857106 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.857117 | orchestrator |
2025-06-22 11:58:53.857127 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-06-22 11:58:53.857151 | orchestrator | Sunday 22 June 2025 11:54:42 +0000 (0:00:01.300) 0:00:27.147 ***********
2025-06-22 11:58:53.857162 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.857173 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.857184 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.857194 | orchestrator |
2025-06-22 11:58:53.857205 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-06-22 11:58:53.857216 | orchestrator | Sunday 22 June 2025 11:54:44 +0000 (0:00:01.300) 0:00:28.448 ***********
2025-06-22 11:58:53.857226 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.857237 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.857248 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.857258 | orchestrator |
2025-06-22 11:58:53.857269 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-06-22 11:58:53.857280 | orchestrator | Sunday 22 June 2025 11:54:45 +0000 (0:00:00.823) 0:00:29.272 ***********
2025-06-22 11:58:53.857290 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.857301 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.857312 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.857322 | orchestrator |
2025-06-22 11:58:53.857333 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-06-22 11:58:53.857344 | orchestrator | Sunday 22 June 2025 11:54:45 +0000 (0:00:00.420) 0:00:29.692 ***********
2025-06-22 11:58:53.857354 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:58:53.857365 | orchestrator |
2025-06-22 11:58:53.857376 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-06-22 11:58:53.857386 | orchestrator | Sunday 22 June 2025 11:54:46 +0000 (0:00:00.710) 0:00:30.403 ***********
2025-06-22 11:58:53.857397 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.857407 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.857418 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.857428 | orchestrator |
2025-06-22 11:58:53.857439 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-06-22 11:58:53.857449 | orchestrator | Sunday 22 June 2025 11:54:49 +0000 (0:00:03.070) 0:00:33.474 ***********
2025-06-22 11:58:53.857460 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.857519 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.857531 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.857542 | orchestrator |
2025-06-22 11:58:53.857552 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-06-22 11:58:53.857563 | orchestrator | Sunday 22 June 2025 11:54:50 +0000 (0:00:00.933) 0:00:34.407 ***********
2025-06-22 11:58:53.857574 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.857584 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.857595 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.857605 | orchestrator |
2025-06-22 11:58:53.857615 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-06-22 11:58:53.857626 | orchestrator | Sunday 22 June 2025 11:54:51 +0000 (0:00:00.937) 0:00:35.345 ***********
2025-06-22 11:58:53.857637 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.857647 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.857658 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.857668 | orchestrator |
2025-06-22 11:58:53.857679 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-06-22 11:58:53.857689 | orchestrator | Sunday 22 June 2025 11:54:53 +0000 (0:00:02.540) 0:00:37.886 ***********
2025-06-22 11:58:53.857700 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.857710 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.857721 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.857731 | orchestrator |
2025-06-22 11:58:53.857742 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-06-22 11:58:53.857752 | orchestrator | Sunday 22 June 2025 11:54:54 +0000 (0:00:00.416) 0:00:38.302 ***********
2025-06-22 11:58:53.857770 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.857781 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.857791 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.857802 | orchestrator |
2025-06-22 11:58:53.857812 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-06-22 11:58:53.857823 | orchestrator | Sunday 22 June 2025 11:54:54 +0000 (0:00:00.370) 0:00:38.673 ***********
2025-06-22 11:58:53.857833 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.857844 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.857854 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.857865 | orchestrator |
2025-06-22 11:58:53.857876 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-06-22 11:58:53.857886 | orchestrator | Sunday 22 June 2025 11:54:56 +0000 (0:00:01.989) 0:00:40.663 ***********
2025-06-22 11:58:53.857904 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-22 11:58:53.857917 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-22 11:58:53.857928 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-22 11:58:53.857939 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-22 11:58:53.857949 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-22 11:58:53.857960 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-22 11:58:53.857969 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-22 11:58:53.857983 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-22 11:58:53.857993 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-22 11:58:53.858002 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-22 11:58:53.858012 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-22 11:58:53.858062 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-22 11:58:53.858072 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-22 11:58:53.858082 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-22 11:58:53.858091 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
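The FAILED - RETRYING records above are produced by Ansible's `until`/`retries`/`delay` loop keywords: each line is one failed evaluation of the `until` expression, counting down from the task's retry budget of 20. A minimal sketch of what such a verification task typically looks like; the exact command, the jsonpath expression, the `master` group name, and the 10-second delay are illustrative assumptions, not the testbed's actual task definition:

```yaml
# Hedged sketch of an "until all masters joined" check. The task name and
# the 20-retry budget match the log above; everything else is assumed.
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'
  register: joined_nodes
  changed_when: false
  until: joined_nodes.stdout.split() | length == groups['master'] | length
  retries: 20
  delay: 10
```

With this pattern the run above converged well within budget: the counter only dropped from 20 to 16 before all three masters reported in, roughly a minute after the transient k3s-init service was started (11:54:56 to 11:55:52).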
2025-06-22 11:58:53.858101 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.858111 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.858120 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.858130 | orchestrator |
2025-06-22 11:58:53.858139 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-06-22 11:58:53.858149 | orchestrator | Sunday 22 June 2025 11:55:52 +0000 (0:00:55.715) 0:01:36.378 ***********
2025-06-22 11:58:53.858159 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.858168 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.858184 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.858193 | orchestrator |
2025-06-22 11:58:53.858203 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-06-22 11:58:53.858212 | orchestrator | Sunday 22 June 2025 11:55:52 +0000 (0:00:00.371) 0:01:36.750 ***********
2025-06-22 11:58:53.858222 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858231 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858241 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858250 | orchestrator |
2025-06-22 11:58:53.858260 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-06-22 11:58:53.858269 | orchestrator | Sunday 22 June 2025 11:55:53 +0000 (0:00:01.212) 0:01:37.962 ***********
2025-06-22 11:58:53.858279 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858300 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858310 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858319 | orchestrator |
2025-06-22 11:58:53.858337 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-06-22 11:58:53.858347 | orchestrator | Sunday 22 June 2025 11:55:55 +0000 (0:00:01.311) 0:01:39.274 ***********
2025-06-22 11:58:53.858356 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858366 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858375 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858385 | orchestrator |
2025-06-22 11:58:53.858394 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-06-22 11:58:53.858404 | orchestrator | Sunday 22 June 2025 11:56:07 +0000 (0:00:12.323) 0:01:51.598 ***********
2025-06-22 11:58:53.858413 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.858423 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.858432 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.858442 | orchestrator |
2025-06-22 11:58:53.858451 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-06-22 11:58:53.858461 | orchestrator | Sunday 22 June 2025 11:56:08 +0000 (0:00:00.814) 0:01:52.413 ***********
2025-06-22 11:58:53.858486 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.858496 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.858506 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.858515 | orchestrator |
2025-06-22 11:58:53.858525 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-06-22 11:58:53.858534 | orchestrator | Sunday 22 June 2025 11:56:08 +0000 (0:00:00.698) 0:01:53.111 ***********
2025-06-22 11:58:53.858544 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858553 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858563 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858572 | orchestrator |
2025-06-22 11:58:53.858587 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-06-22 11:58:53.858597 | orchestrator | Sunday 22 June 2025 11:56:09 +0000 (0:00:00.678) 0:01:53.790 ***********
2025-06-22 11:58:53.858607 | orchestrator | ok: [testbed-node-1]
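The `Sunday 22 June 2025 … (0:00:12.323) 0:01:51.598` lines under each task banner come from Ansible's `profile_tasks` callback: the parenthesized figure is the runtime of the task that just finished, the second figure is cumulative elapsed time. A minimal sketch of a helper for pulling per-task runtimes out of a console dump like this one (a hypothetical post-processing script, not part of the testbed tooling):

```python
import re

# Timing line as emitted by the profile_tasks callback, e.g.
#   Sunday 22 June 2025 11:56:07 +0000 (0:00:12.323) 0:01:51.598 ***********
# group 1: runtime of the just-finished task, group 2: cumulative elapsed time.
TIMING = re.compile(r"\((\d+:\d{2}:\d{2}\.\d{3})\)\s+(\d+:\d{2}:\d{2}\.\d{3})")


def to_seconds(stamp: str) -> float:
    """Convert an 'h:mm:ss.mmm' stamp to seconds."""
    hours, minutes, seconds = stamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)


def task_durations(console_text: str) -> list[float]:
    """Return every per-task runtime (in seconds) found in a console dump."""
    return [to_seconds(m.group(1)) for m in TIMING.finditer(console_text)]


sample = "Sunday 22 June 2025 11:56:07 +0000 (0:00:12.323) 0:01:51.598 ***********"
print(max(task_durations(sample)))  # 12.323
```

Sorting the extracted runtimes is a quick way to spot the slow steps in this run, such as the 63-second wait for the Cilium resources further down.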
2025-06-22 11:58:53.858635 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.858645 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.858654 | orchestrator |
2025-06-22 11:58:53.858664 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-06-22 11:58:53.858674 | orchestrator | Sunday 22 June 2025 11:56:10 +0000 (0:00:00.966) 0:01:54.757 ***********
2025-06-22 11:58:53.858683 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.858692 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.858702 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.858711 | orchestrator |
2025-06-22 11:58:53.858721 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-06-22 11:58:53.858731 | orchestrator | Sunday 22 June 2025 11:56:10 +0000 (0:00:00.347) 0:01:55.104 ***********
2025-06-22 11:58:53.858740 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858750 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858759 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858769 | orchestrator |
2025-06-22 11:58:53.858778 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-22 11:58:53.858794 | orchestrator | Sunday 22 June 2025 11:56:11 +0000 (0:00:00.643) 0:01:55.748 ***********
2025-06-22 11:58:53.858803 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858813 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858822 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858831 | orchestrator |
2025-06-22 11:58:53.858841 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-22 11:58:53.858850 | orchestrator | Sunday 22 June 2025 11:56:12 +0000 (0:00:00.705) 0:01:56.454 ***********
2025-06-22 11:58:53.858860 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858869 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858878 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858888 | orchestrator |
2025-06-22 11:58:53.858897 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-22 11:58:53.858907 | orchestrator | Sunday 22 June 2025 11:56:13 +0000 (0:00:01.318) 0:01:57.773 ***********
2025-06-22 11:58:53.858936 | orchestrator | changed: [testbed-node-1]
2025-06-22 11:58:53.858946 | orchestrator | changed: [testbed-node-0]
2025-06-22 11:58:53.858956 | orchestrator | changed: [testbed-node-2]
2025-06-22 11:58:53.858965 | orchestrator |
2025-06-22 11:58:53.858974 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-22 11:58:53.858984 | orchestrator | Sunday 22 June 2025 11:56:14 +0000 (0:00:00.861) 0:01:58.634 ***********
2025-06-22 11:58:53.858993 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.859002 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.859012 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.859021 | orchestrator |
2025-06-22 11:58:53.859031 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-22 11:58:53.859040 | orchestrator | Sunday 22 June 2025 11:56:14 +0000 (0:00:00.306) 0:01:58.941 ***********
2025-06-22 11:58:53.859050 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.859059 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.859068 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.859078 | orchestrator |
2025-06-22 11:58:53.859087 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-22 11:58:53.859097 | orchestrator | Sunday 22 June 2025 11:56:15 +0000 (0:00:00.321) 0:01:59.262 ***********
2025-06-22 11:58:53.859106 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.859116 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.859125 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.859135 | orchestrator |
2025-06-22 11:58:53.859144 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-22 11:58:53.859154 | orchestrator | Sunday 22 June 2025 11:56:16 +0000 (0:00:00.954) 0:02:00.217 ***********
2025-06-22 11:58:53.859163 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.859172 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.859182 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.859191 | orchestrator |
2025-06-22 11:58:53.859201 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-06-22 11:58:53.859211 | orchestrator | Sunday 22 June 2025 11:56:16 +0000 (0:00:00.627) 0:02:00.844 ***********
2025-06-22 11:58:53.859220 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-22 11:58:53.859230 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-22 11:58:53.859239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-22 11:58:53.859249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-22 11:58:53.859258 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-22 11:58:53.859267 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-22 11:58:53.859283 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-22 11:58:53.859292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-22 11:58:53.859302 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-22 11:58:53.859311 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-22 11:58:53.859321 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-06-22 11:58:53.859330 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-22 11:58:53.859345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-22 11:58:53.859355 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-06-22 11:58:53.859365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-22 11:58:53.859374 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-22 11:58:53.859936 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-22 11:58:53.859954 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-22 11:58:53.859966 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-22 11:58:53.859981 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-22 11:58:53.859992 | orchestrator |
2025-06-22 11:58:53.860001 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-06-22 11:58:53.860011 | orchestrator |
2025-06-22 11:58:53.860020 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-06-22 11:58:53.860030 | orchestrator | Sunday 22 June 2025 11:56:19 +0000 (0:00:03.140) 0:02:03.985 ***********
2025-06-22 11:58:53.860039 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:58:53.860049 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:58:53.860059 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:58:53.860068 | orchestrator |
2025-06-22 11:58:53.860078 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-06-22 11:58:53.860087 | orchestrator | Sunday 22 June 2025 11:56:20 +0000 (0:00:00.527) 0:02:04.513 ***********
2025-06-22 11:58:53.860097 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:58:53.860106 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:58:53.860116 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:58:53.860125 | orchestrator |
2025-06-22 11:58:53.860135 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-06-22 11:58:53.860145 | orchestrator | Sunday 22 June 2025 11:56:21 +0000 (0:00:00.706) 0:02:05.220 ***********
2025-06-22 11:58:53.860154 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:58:53.860164 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:58:53.860173 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:58:53.860183 | orchestrator |
2025-06-22 11:58:53.860192 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-06-22 11:58:53.860202 | orchestrator | Sunday 22 June 2025 11:56:21 +0000 (0:00:00.347) 0:02:05.567 ***********
2025-06-22 11:58:53.860212 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 11:58:53.860222 | orchestrator |
2025-06-22 11:58:53.860231 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-06-22 11:58:53.860241 | orchestrator | Sunday 22 June 2025 11:56:22 +0000 (0:00:00.832) 0:02:06.399 ***********
2025-06-22 11:58:53.860251 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.860260 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.860270 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.860286 | orchestrator |
2025-06-22 11:58:53.860296 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-06-22 11:58:53.860306 | orchestrator | Sunday 22 June 2025 11:56:22 +0000 (0:00:00.332) 0:02:06.732 ***********
2025-06-22 11:58:53.860315 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.860326 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.860342 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.860352 | orchestrator |
2025-06-22 11:58:53.860362 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-06-22 11:58:53.860371 | orchestrator | Sunday 22 June 2025 11:56:22 +0000 (0:00:00.343) 0:02:07.075 ***********
2025-06-22 11:58:53.860381 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.860390 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.860400 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.860409 | orchestrator |
2025-06-22 11:58:53.860418 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-06-22 11:58:53.860428 | orchestrator | Sunday 22 June 2025 11:56:23 +0000 (0:00:00.354) 0:02:07.430 ***********
2025-06-22 11:58:53.860438 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:53.860447 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:53.860457 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:53.860466 | orchestrator |
2025-06-22 11:58:53.860491 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-06-22 11:58:53.860501 | orchestrator | Sunday 22 June 2025 11:56:24 +0000 (0:00:01.637) 0:02:09.067 ***********
2025-06-22 11:58:53.860510 | orchestrator | changed: [testbed-node-3]
2025-06-22 11:58:53.860520 | orchestrator | changed: [testbed-node-5]
2025-06-22 11:58:53.860529 | orchestrator | changed: [testbed-node-4]
2025-06-22 11:58:53.860539 | orchestrator |
2025-06-22 11:58:53.860549 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-22 11:58:53.860558 | orchestrator |
2025-06-22 11:58:53.860568 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-22 11:58:53.860578 | orchestrator | Sunday 22 June 2025 11:56:34 +0000 (0:00:09.349) 0:02:18.416 ***********
2025-06-22 11:58:53.860587 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.860597 | orchestrator |
2025-06-22 11:58:53.860606 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-22 11:58:53.860616 | orchestrator | Sunday 22 June 2025 11:56:35 +0000 (0:00:00.946) 0:02:19.363 ***********
2025-06-22 11:58:53.860626 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.860635 | orchestrator |
2025-06-22 11:58:53.860644 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-22 11:58:53.860654 | orchestrator | Sunday 22 June 2025 11:56:35 +0000 (0:00:00.470) 0:02:19.833 ***********
2025-06-22 11:58:53.860664 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-22 11:58:53.860673 | orchestrator |
2025-06-22 11:58:53.860690 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-22 11:58:53.860700 | orchestrator | Sunday 22 June 2025 11:56:36 +0000 (0:00:01.119) 0:02:20.953 ***********
2025-06-22 11:58:53.860710 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.860719 | orchestrator |
2025-06-22 11:58:53.860729 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-22 11:58:53.860738 | orchestrator | Sunday 22 June 2025 11:56:37 +0000 (0:00:00.846) 0:02:21.799 ***********
2025-06-22 11:58:53.860748 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.860757 | orchestrator |
2025-06-22 11:58:53.860767 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-22 11:58:53.860783 | orchestrator | Sunday 22 June 2025 11:56:38 +0000 (0:00:00.628) 0:02:22.427 ***********
2025-06-22 11:58:53.860793 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-22 11:58:53.860802 | orchestrator |
2025-06-22 11:58:53.860812 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-22 11:58:53.860821 | orchestrator | Sunday 22 June 2025 11:56:39 +0000 (0:00:01.569) 0:02:23.996 ***********
2025-06-22 11:58:53.860836 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-22 11:58:53.860846 | orchestrator |
2025-06-22 11:58:53.860855 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-22 11:58:53.860864 | orchestrator | Sunday 22 June 2025 11:56:40 +0000 (0:00:00.874) 0:02:24.871 ***********
2025-06-22 11:58:53.860874 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.860883 | orchestrator |
2025-06-22 11:58:53.860893 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-22 11:58:53.860902 | orchestrator | Sunday 22 June 2025 11:56:41 +0000 (0:00:00.563) 0:02:25.435 ***********
2025-06-22 11:58:53.860912 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.860921 | orchestrator |
2025-06-22 11:58:53.860931 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-06-22 11:58:53.860940 | orchestrator |
2025-06-22 11:58:53.860950 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-06-22 11:58:53.860959 | orchestrator | Sunday 22 June 2025 11:56:41 +0000 (0:00:00.451) 0:02:25.887 ***********
2025-06-22 11:58:53.860968 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.860978 | orchestrator |
2025-06-22 11:58:53.860987 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-06-22 11:58:53.860996 | orchestrator | Sunday 22 June 2025 11:56:41 +0000 (0:00:00.173) 0:02:26.060 ***********
2025-06-22 11:58:53.861006 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-06-22 11:58:53.861015 | orchestrator |
2025-06-22 11:58:53.861025 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-06-22 11:58:53.861034 | orchestrator | Sunday 22 June 2025 11:56:42 +0000 (0:00:00.439) 0:02:26.499 ***********
2025-06-22 11:58:53.861043 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.861053 | orchestrator |
2025-06-22 11:58:53.861062 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-06-22 11:58:53.861072 | orchestrator | Sunday 22 June 2025 11:56:43 +0000 (0:00:00.976) 0:02:27.476 ***********
2025-06-22 11:58:53.861081 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.861091 | orchestrator |
2025-06-22 11:58:53.861100 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-06-22 11:58:53.861110 | orchestrator | Sunday 22 June 2025 11:56:45 +0000 (0:00:01.913) 0:02:29.389 ***********
2025-06-22 11:58:53.861119 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.861128 | orchestrator |
2025-06-22 11:58:53.861138 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-06-22 11:58:53.861147 | orchestrator | Sunday 22 June 2025 11:56:46 +0000 (0:00:00.958) 0:02:30.348 ***********
2025-06-22 11:58:53.861157 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.861166 | orchestrator |
2025-06-22 11:58:53.861176 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-06-22 11:58:53.861185 | orchestrator | Sunday 22 June 2025 11:56:46 +0000 (0:00:00.448) 0:02:30.797 ***********
2025-06-22 11:58:53.861194 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.861204 | orchestrator |
2025-06-22 11:58:53.861213 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-06-22 11:58:53.861223 | orchestrator | Sunday 22 June 2025 11:56:53 +0000 (0:00:07.208) 0:02:38.006 ***********
2025-06-22 11:58:53.861232 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.861241 | orchestrator |
2025-06-22 11:58:53.861251 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-06-22 11:58:53.861260 | orchestrator | Sunday 22 June 2025 11:57:06 +0000 (0:00:12.821) 0:02:50.828 ***********
2025-06-22 11:58:53.861270 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.861279 | orchestrator |
2025-06-22 11:58:53.861288 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-06-22 11:58:53.861298 | orchestrator |
2025-06-22 11:58:53.861307 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-06-22 11:58:53.861322 | orchestrator | Sunday 22 June 2025 11:57:07 +0000 (0:00:00.552) 0:02:51.380 ***********
2025-06-22 11:58:53.861332 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.861341 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.861350 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.861360 | orchestrator |
2025-06-22 11:58:53.861369 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-06-22 11:58:53.861380 | orchestrator | Sunday 22 June 2025 11:57:07 +0000 (0:00:00.585) 0:02:51.967 ***********
2025-06-22 11:58:53.861396 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.861406 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.861415 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.861425 | orchestrator |
2025-06-22 11:58:53.861434 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-06-22 11:58:53.861445 | orchestrator | Sunday 22 June 2025 11:57:08 +0000 (0:00:00.304) 0:02:52.271 ***********
2025-06-22 11:58:53.861461 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 11:58:53.861519 | orchestrator |
2025-06-22 11:58:53.861530 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-06-22 11:58:53.861546 | orchestrator | Sunday 22 June 2025 11:57:08 +0000 (0:00:00.501) 0:02:52.773 ***********
2025-06-22 11:58:53.861556 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.861565 | orchestrator |
2025-06-22 11:58:53.861575 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-06-22 11:58:53.861584 | orchestrator | Sunday 22 June 2025 11:57:09 +0000 (0:00:01.333) 0:02:54.106 ***********
2025-06-22 11:58:53.861594 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.861603 | orchestrator |
2025-06-22 11:58:53.861613 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-06-22 11:58:53.861627 | orchestrator | Sunday 22 June 2025 11:57:10 +0000 (0:00:00.887) 0:02:54.994 ***********
2025-06-22 11:58:53.861637 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.861646 | orchestrator |
2025-06-22 11:58:53.861656 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-06-22 11:58:53.861666 | orchestrator | Sunday 22 June 2025 11:57:11 +0000 (0:00:00.210) 0:02:55.205 ***********
2025-06-22 11:58:53.861675 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.861683 | orchestrator |
2025-06-22 11:58:53.861690 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-06-22 11:58:53.861698 | orchestrator | Sunday 22 June 2025 11:57:12 +0000 (0:00:01.046) 0:02:56.252 ***********
2025-06-22 11:58:53.861706 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.861714 | orchestrator |
2025-06-22 11:58:53.861722 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-06-22 11:58:53.861729 | orchestrator | Sunday 22 June 2025 11:57:12 +0000 (0:00:00.217) 0:02:56.469 ***********
2025-06-22 11:58:53.861737 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.861745 | orchestrator |
2025-06-22 11:58:53.861753 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-06-22 11:58:53.861761 | orchestrator | Sunday 22 June 2025 11:57:12 +0000 (0:00:00.199) 0:02:56.669 ***********
2025-06-22 11:58:53.861768 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.861776 | orchestrator |
2025-06-22 11:58:53.861784 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-06-22 11:58:53.861792 | orchestrator | Sunday 22 June 2025 11:57:12 +0000 (0:00:00.280) 0:02:56.949 ***********
2025-06-22 11:58:53.861799 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.861807 | orchestrator |
2025-06-22 11:58:53.861815 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-06-22 11:58:53.861823 | orchestrator | Sunday 22 June 2025 11:57:12 +0000 (0:00:00.227) 0:02:57.177 ***********
2025-06-22 11:58:53.861831 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.861839 | orchestrator |
2025-06-22 11:58:53.861846 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-06-22 11:58:53.861859 | orchestrator | Sunday 22 June 2025 11:57:17 +0000 (0:00:04.910) 0:03:02.088 ***********
2025-06-22 11:58:53.861867 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-06-22 11:58:53.861875 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-06-22 11:58:53.861883 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-06-22 11:58:53.861890 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-06-22 11:58:53.861898 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-06-22 11:58:53.861906 | orchestrator |
2025-06-22 11:58:53.861914 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-06-22 11:58:53.861924 | orchestrator | Sunday 22 June 2025 11:58:21 +0000 (0:01:03.399) 0:04:05.487 ***********
2025-06-22 11:58:53.861937 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.861945 | orchestrator |
2025-06-22 11:58:53.861953 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-06-22 11:58:53.861961 | orchestrator | Sunday 22 June 2025 11:58:22 +0000 (0:00:01.474) 0:04:06.961 ***********
2025-06-22 11:58:53.861969 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.861976 | orchestrator |
2025-06-22 11:58:53.861984 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-06-22 11:58:53.861992 | orchestrator | Sunday 22 June 2025 11:58:24 +0000 (0:00:02.112) 0:04:09.074 ***********
2025-06-22 11:58:53.862000 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-22 11:58:53.862007 | orchestrator |
2025-06-22 11:58:53.862039 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-06-22 11:58:53.862049 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:01.586) 0:04:10.660 ***********
2025-06-22 11:58:53.862057 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.862065 | orchestrator |
2025-06-22 11:58:53.862072 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-06-22 11:58:53.862080 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.240) 0:04:10.901 ***********
2025-06-22 11:58:53.862088 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-06-22 11:58:53.862095 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-06-22 11:58:53.862103 | orchestrator |
2025-06-22 11:58:53.862111 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-06-22 11:58:53.862119 | orchestrator | Sunday 22 June 2025 11:58:28 +0000 (0:00:01.872) 0:04:12.774 ***********
2025-06-22 11:58:53.862126 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.862134 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.862142 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.862149 | orchestrator |
2025-06-22 11:58:53.862157 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-06-22 11:58:53.862165 | orchestrator | Sunday 22 June 2025 11:58:28 +0000 (0:00:00.308) 0:04:13.083 ***********
2025-06-22 11:58:53.862172 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.862180 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.862188 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.862196 | orchestrator |
2025-06-22 11:58:53.862208 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-06-22 11:58:53.862216 | orchestrator |
2025-06-22 11:58:53.862224 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-06-22 11:58:53.862232 | orchestrator | Sunday 22 June 2025 11:58:29 +0000 (0:00:00.792) 0:04:13.875 ***********
2025-06-22 11:58:53.862240 | orchestrator | ok: [testbed-manager]
2025-06-22 11:58:53.862247 | orchestrator |
2025-06-22 11:58:53.862255 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-06-22 11:58:53.862263 | orchestrator | Sunday 22 June 2025 11:58:29 +0000 (0:00:00.254) 0:04:14.130 ***********
2025-06-22 11:58:53.862279 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-06-22 11:58:53.862287 | orchestrator |
2025-06-22 11:58:53.862294 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-06-22 11:58:53.862302 | orchestrator | Sunday 22 June 2025 11:58:30 +0000 (0:00:00.220) 0:04:14.351 ***********
2025-06-22 11:58:53.862310 | orchestrator | changed: [testbed-manager]
2025-06-22 11:58:53.862318 | orchestrator |
2025-06-22 11:58:53.862325 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-06-22 11:58:53.862333 | orchestrator |
2025-06-22 11:58:53.862341 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-06-22 11:58:53.862348 | orchestrator | Sunday 22 June 2025 11:58:35 +0000 (0:00:05.758) 0:04:20.109 ***********
2025-06-22 11:58:53.862356 | orchestrator | ok: [testbed-node-3]
2025-06-22 11:58:53.862364 | orchestrator | ok: [testbed-node-4]
2025-06-22 11:58:53.862372 | orchestrator | ok: [testbed-node-5]
2025-06-22 11:58:53.862380 | orchestrator | ok: [testbed-node-0]
2025-06-22 11:58:53.862387 | orchestrator | ok: [testbed-node-1]
2025-06-22 11:58:53.862395 | orchestrator | ok: [testbed-node-2]
2025-06-22 11:58:53.862403 | orchestrator |
2025-06-22 11:58:53.862411 | orchestrator | TASK [Manage labels] ***********************************************************
2025-06-22 11:58:53.862418 | orchestrator | Sunday 22 June 2025 11:58:36 +0000 (0:00:00.543) 0:04:20.653 ***********
2025-06-22 11:58:53.862426 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-22 11:58:53.862434 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-22 11:58:53.862442 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-22 11:58:53.862449 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-22 11:58:53.862457 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-22 11:58:53.862465 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-22 11:58:53.862486 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-22 11:58:53.862494 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-22 11:58:53.862502 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-22 11:58:53.862510 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-22 11:58:53.862518 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-22 11:58:53.862525 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-22 11:58:53.862533 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-22 11:58:53.862541 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-22 11:58:53.862549 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-22 11:58:53.862556 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-22 11:58:53.862564 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-22 11:58:53.862572 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-22 11:58:53.862580 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-22 11:58:53.862588 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-22 11:58:53.862595 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-22 11:58:53.862603 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-22 11:58:53.862616 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-22 11:58:53.862624 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-22 11:58:53.862631 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-22 11:58:53.862639 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-22 11:58:53.862647 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-22 11:58:53.862655 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-22 11:58:53.862663 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-22 11:58:53.862671 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-22 11:58:53.862679 | orchestrator |
2025-06-22 11:58:53.862691 | orchestrator | TASK [Manage annotations] ******************************************************
2025-06-22 11:58:53.862699 | orchestrator | Sunday 22 June 2025 11:58:49 +0000 (0:00:13.106) 0:04:33.759 ***********
2025-06-22 11:58:53.862707 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.862715 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.862723 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.862730 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.862738 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.862746 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.862753 | orchestrator |
2025-06-22 11:58:53.862761 | orchestrator | TASK [Manage taints] ***********************************************************
2025-06-22 11:58:53.862772 | orchestrator | Sunday 22 June 2025 11:58:50 +0000 (0:00:00.619) 0:04:34.378 ***********
2025-06-22 11:58:53.862780 | orchestrator | skipping: [testbed-node-3]
2025-06-22 11:58:53.862788 | orchestrator | skipping: [testbed-node-4]
2025-06-22 11:58:53.862796 | orchestrator | skipping: [testbed-node-5]
2025-06-22 11:58:53.862803 | orchestrator | skipping: [testbed-node-0]
2025-06-22 11:58:53.862811 | orchestrator | skipping: [testbed-node-1]
2025-06-22 11:58:53.862819 | orchestrator | skipping: [testbed-node-2]
2025-06-22 11:58:53.862826 | orchestrator |
2025-06-22 11:58:53.862834 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 11:58:53.862842 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 11:58:53.862851 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-06-22 11:58:53.862859 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-22 11:58:53.862867 | orchestrator | testbed-node-2 : ok=34 
changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 11:58:53.862875 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 11:58:53.862883 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 11:58:53.862890 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 11:58:53.862898 | orchestrator | 2025-06-22 11:58:53.862906 | orchestrator | 2025-06-22 11:58:53.862914 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 11:58:53.862922 | orchestrator | Sunday 22 June 2025 11:58:50 +0000 (0:00:00.537) 0:04:34.916 *********** 2025-06-22 11:58:53.862929 | orchestrator | =============================================================================== 2025-06-22 11:58:53.862941 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 63.40s 2025-06-22 11:58:53.862949 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.72s 2025-06-22 11:58:53.862957 | orchestrator | Manage labels ---------------------------------------------------------- 13.11s 2025-06-22 11:58:53.862965 | orchestrator | kubectl : Install required packages ------------------------------------ 12.82s 2025-06-22 11:58:53.862972 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 12.32s 2025-06-22 11:58:53.862980 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.35s 2025-06-22 11:58:53.862988 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.21s 2025-06-22 11:58:53.862996 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.98s 2025-06-22 11:58:53.863003 | orchestrator | k9s : Install k9s 
packages ---------------------------------------------- 5.76s 2025-06-22 11:58:53.863011 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.91s 2025-06-22 11:58:53.863019 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.14s 2025-06-22 11:58:53.863026 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.07s 2025-06-22 11:58:53.863034 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.54s 2025-06-22 11:58:53.863042 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.11s 2025-06-22 11:58:53.863050 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.03s 2025-06-22 11:58:53.863057 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.99s 2025-06-22 11:58:53.863065 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.91s 2025-06-22 11:58:53.863073 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.87s 2025-06-22 11:58:53.863080 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.65s 2025-06-22 11:58:53.863088 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.64s 2025-06-22 11:58:53.863096 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task 6f1577bd-96e2-4f32-8170-dd39a356bd8e is in state SUCCESS 2025-06-22 11:58:53.863108 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task 6b3a467a-34ac-4ff3-8b9c-8c75a64b2510 is in state STARTED 2025-06-22 11:58:53.863116 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:58:53.863124 | orchestrator | 2025-06-22 11:58:53 | INFO  | Task 
0bc88686-b175-4261-9b5f-ccfb8337298d is in state STARTED 2025-06-22 11:58:53.863132 | orchestrator | 2025-06-22 11:58:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:58:56.938061 | orchestrator | 2025-06-22 11:58:56 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:58:56.938744 | orchestrator | 2025-06-22 11:58:56 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:58:56.939132 | orchestrator | 2025-06-22 11:58:56 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 11:58:56.942928 | orchestrator | 2025-06-22 11:58:56 | INFO  | Task 6b3a467a-34ac-4ff3-8b9c-8c75a64b2510 is in state STARTED 2025-06-22 11:58:56.942958 | orchestrator | 2025-06-22 11:58:56 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:58:56.942969 | orchestrator | 2025-06-22 11:58:56 | INFO  | Task 0bc88686-b175-4261-9b5f-ccfb8337298d is in state STARTED 2025-06-22 11:58:56.942981 | orchestrator | 2025-06-22 11:58:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:58:59.982306 | orchestrator | 2025-06-22 11:58:59 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:58:59.982636 | orchestrator | 2025-06-22 11:58:59 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:58:59.983325 | orchestrator | 2025-06-22 11:58:59 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 11:58:59.986291 | orchestrator | 2025-06-22 11:58:59 | INFO  | Task 6b3a467a-34ac-4ff3-8b9c-8c75a64b2510 is in state STARTED 2025-06-22 11:58:59.987115 | orchestrator | 2025-06-22 11:58:59 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:58:59.987563 | orchestrator | 2025-06-22 11:58:59 | INFO  | Task 0bc88686-b175-4261-9b5f-ccfb8337298d is in state SUCCESS 2025-06-22 11:58:59.987595 | orchestrator | 2025-06-22 11:58:59 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 11:59:03.029172 | orchestrator | 2025-06-22 11:59:03 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:59:03.030987 | orchestrator | 2025-06-22 11:59:03 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:59:03.032821 | orchestrator | 2025-06-22 11:59:03 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 11:59:03.033400 | orchestrator | 2025-06-22 11:59:03 | INFO  | Task 6b3a467a-34ac-4ff3-8b9c-8c75a64b2510 is in state SUCCESS 2025-06-22 11:59:03.034440 | orchestrator | 2025-06-22 11:59:03 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:59:03.034672 | orchestrator | 2025-06-22 11:59:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:59:06.081031 | orchestrator | 2025-06-22 11:59:06 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:59:06.081163 | orchestrator | 2025-06-22 11:59:06 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:59:06.082291 | orchestrator | 2025-06-22 11:59:06 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 11:59:06.083250 | orchestrator | 2025-06-22 11:59:06 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:59:06.085673 | orchestrator | 2025-06-22 11:59:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:59:09.121948 | orchestrator | 2025-06-22 11:59:09 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:59:09.123286 | orchestrator | 2025-06-22 11:59:09 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:59:09.124659 | orchestrator | 2025-06-22 11:59:09 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 11:59:09.126063 | orchestrator | 2025-06-22 11:59:09 | INFO  | Task 
35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:59:54.977062 | orchestrator | 2025-06-22 11:59:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 11:59:58.025787 | orchestrator | 2025-06-22 11:59:58 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 11:59:58.029230 | orchestrator | 2025-06-22 11:59:58 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 11:59:58.029696 | orchestrator | 2025-06-22 11:59:58 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 11:59:58.030257 | orchestrator | 2025-06-22 11:59:58 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 11:59:58.030279 | orchestrator | 2025-06-22 11:59:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:01.074884 | orchestrator | 2025-06-22 12:00:01 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:01.080911 | orchestrator | 2025-06-22 12:00:01 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state STARTED 2025-06-22 12:00:01.080957 | orchestrator | 2025-06-22 12:00:01 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:01.082112 | orchestrator | 2025-06-22 12:00:01 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:01.082137 | orchestrator | 2025-06-22 12:00:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:04.128211 | orchestrator | 2025-06-22 12:00:04 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:04.131007 | orchestrator | 2025-06-22 12:00:04.131082 | orchestrator | 2025-06-22 12:00:04.131106 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-22 12:00:04.131119 | orchestrator | 2025-06-22 12:00:04.131131 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 
2025-06-22 12:00:04.131143 | orchestrator | Sunday 22 June 2025 11:58:55 +0000 (0:00:00.119) 0:00:00.119 *********** 2025-06-22 12:00:04.131155 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 12:00:04.131166 | orchestrator | 2025-06-22 12:00:04.131177 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 12:00:04.131188 | orchestrator | Sunday 22 June 2025 11:58:56 +0000 (0:00:00.711) 0:00:00.831 *********** 2025-06-22 12:00:04.131226 | orchestrator | changed: [testbed-manager] 2025-06-22 12:00:04.131238 | orchestrator | 2025-06-22 12:00:04.131249 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-22 12:00:04.131260 | orchestrator | Sunday 22 June 2025 11:58:57 +0000 (0:00:01.011) 0:00:01.843 *********** 2025-06-22 12:00:04.131271 | orchestrator | changed: [testbed-manager] 2025-06-22 12:00:04.131282 | orchestrator | 2025-06-22 12:00:04.131293 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:00:04.131305 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:00:04.131317 | orchestrator | 2025-06-22 12:00:04.131328 | orchestrator | 2025-06-22 12:00:04.131339 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:00:04.131350 | orchestrator | Sunday 22 June 2025 11:58:57 +0000 (0:00:00.524) 0:00:02.367 *********** 2025-06-22 12:00:04.131361 | orchestrator | =============================================================================== 2025-06-22 12:00:04.131371 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.01s 2025-06-22 12:00:04.131382 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s 2025-06-22 12:00:04.131393 | orchestrator | Change server address 
in the kubeconfig file ---------------------------- 0.52s 2025-06-22 12:00:04.131404 | orchestrator | 2025-06-22 12:00:04.131414 | orchestrator | 2025-06-22 12:00:04.131425 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 12:00:04.131510 | orchestrator | 2025-06-22 12:00:04.131523 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 12:00:04.131540 | orchestrator | Sunday 22 June 2025 11:58:54 +0000 (0:00:00.164) 0:00:00.164 *********** 2025-06-22 12:00:04.131560 | orchestrator | ok: [testbed-manager] 2025-06-22 12:00:04.131579 | orchestrator | 2025-06-22 12:00:04.131598 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 12:00:04.131618 | orchestrator | Sunday 22 June 2025 11:58:55 +0000 (0:00:00.731) 0:00:00.895 *********** 2025-06-22 12:00:04.131638 | orchestrator | ok: [testbed-manager] 2025-06-22 12:00:04.131658 | orchestrator | 2025-06-22 12:00:04.131677 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 12:00:04.131690 | orchestrator | Sunday 22 June 2025 11:58:56 +0000 (0:00:00.573) 0:00:01.468 *********** 2025-06-22 12:00:04.131701 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 12:00:04.131712 | orchestrator | 2025-06-22 12:00:04.131723 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 12:00:04.131735 | orchestrator | Sunday 22 June 2025 11:58:56 +0000 (0:00:00.763) 0:00:02.232 *********** 2025-06-22 12:00:04.131746 | orchestrator | changed: [testbed-manager] 2025-06-22 12:00:04.131756 | orchestrator | 2025-06-22 12:00:04.131767 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 12:00:04.131778 | orchestrator | Sunday 22 June 2025 11:58:57 +0000 (0:00:01.020) 0:00:03.252 
*********** 2025-06-22 12:00:04.131789 | orchestrator | changed: [testbed-manager] 2025-06-22 12:00:04.131800 | orchestrator | 2025-06-22 12:00:04.131811 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 12:00:04.131822 | orchestrator | Sunday 22 June 2025 11:58:58 +0000 (0:00:00.727) 0:00:03.980 *********** 2025-06-22 12:00:04.131832 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 12:00:04.131843 | orchestrator | 2025-06-22 12:00:04.131854 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 12:00:04.131865 | orchestrator | Sunday 22 June 2025 11:59:00 +0000 (0:00:01.453) 0:00:05.433 *********** 2025-06-22 12:00:04.131876 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 12:00:04.131887 | orchestrator | 2025-06-22 12:00:04.131898 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-22 12:00:04.131937 | orchestrator | Sunday 22 June 2025 11:59:00 +0000 (0:00:00.722) 0:00:06.156 *********** 2025-06-22 12:00:04.131949 | orchestrator | ok: [testbed-manager] 2025-06-22 12:00:04.131960 | orchestrator | 2025-06-22 12:00:04.131971 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 12:00:04.131982 | orchestrator | Sunday 22 June 2025 11:59:01 +0000 (0:00:00.355) 0:00:06.511 *********** 2025-06-22 12:00:04.131993 | orchestrator | ok: [testbed-manager] 2025-06-22 12:00:04.132004 | orchestrator | 2025-06-22 12:00:04.132015 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:00:04.132026 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:00:04.132037 | orchestrator | 2025-06-22 12:00:04.132048 | orchestrator | 2025-06-22 12:00:04.132059 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 12:00:04.132070 | orchestrator | Sunday 22 June 2025 11:59:01 +0000 (0:00:00.266) 0:00:06.778 *********** 2025-06-22 12:00:04.132082 | orchestrator | =============================================================================== 2025-06-22 12:00:04.132093 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.45s 2025-06-22 12:00:04.132103 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s 2025-06-22 12:00:04.132114 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s 2025-06-22 12:00:04.132142 | orchestrator | Get home directory of operator user ------------------------------------- 0.73s 2025-06-22 12:00:04.132153 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s 2025-06-22 12:00:04.132164 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2025-06-22 12:00:04.132175 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s 2025-06-22 12:00:04.132186 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s 2025-06-22 12:00:04.132197 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2025-06-22 12:00:04.132208 | orchestrator | 2025-06-22 12:00:04.132218 | orchestrator | 2025-06-22 12:00:04.132229 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-22 12:00:04.132240 | orchestrator | 2025-06-22 12:00:04.132250 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-22 12:00:04.132261 | orchestrator | Sunday 22 June 2025 11:57:42 +0000 (0:00:00.143) 0:00:00.143 *********** 2025-06-22 12:00:04.132272 | orchestrator | ok: [localhost] => { 
2025-06-22 12:00:04.132284 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-22 12:00:04.132295 | orchestrator | } 2025-06-22 12:00:04.132306 | orchestrator | 2025-06-22 12:00:04.132317 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-22 12:00:04.132328 | orchestrator | Sunday 22 June 2025 11:57:42 +0000 (0:00:00.101) 0:00:00.245 *********** 2025-06-22 12:00:04.132340 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-22 12:00:04.132352 | orchestrator | ...ignoring 2025-06-22 12:00:04.132363 | orchestrator | 2025-06-22 12:00:04.132374 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-22 12:00:04.132385 | orchestrator | Sunday 22 June 2025 11:57:45 +0000 (0:00:03.318) 0:00:03.563 *********** 2025-06-22 12:00:04.132395 | orchestrator | skipping: [localhost] 2025-06-22 12:00:04.132406 | orchestrator | 2025-06-22 12:00:04.132417 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-22 12:00:04.132428 | orchestrator | Sunday 22 June 2025 11:57:46 +0000 (0:00:00.066) 0:00:03.630 *********** 2025-06-22 12:00:04.132464 | orchestrator | ok: [localhost] 2025-06-22 12:00:04.132475 | orchestrator | 2025-06-22 12:00:04.132487 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:00:04.132505 | orchestrator | 2025-06-22 12:00:04.132516 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:00:04.132527 | orchestrator | Sunday 22 June 2025 11:57:46 +0000 (0:00:00.162) 0:00:03.792 *********** 2025-06-22 12:00:04.132538 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:00:04.132549 | 
orchestrator | ok: [testbed-node-1]
2025-06-22 12:00:04.132560 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:00:04.132571 | orchestrator |
2025-06-22 12:00:04.132582 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 12:00:04.132594 | orchestrator | Sunday 22 June 2025 11:57:46 +0000 (0:00:00.425) 0:00:04.218 ***********
2025-06-22 12:00:04.132614 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-06-22 12:00:04.132632 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-06-22 12:00:04.132652 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-06-22 12:00:04.132672 | orchestrator |
2025-06-22 12:00:04.132691 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-06-22 12:00:04.132710 | orchestrator |
2025-06-22 12:00:04.132723 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-22 12:00:04.132734 | orchestrator | Sunday 22 June 2025 11:57:47 +0000 (0:00:01.188) 0:00:05.407 ***********
2025-06-22 12:00:04.132744 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:00:04.132755 | orchestrator |
2025-06-22 12:00:04.132766 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-22 12:00:04.132777 | orchestrator | Sunday 22 June 2025 11:57:49 +0000 (0:00:01.365) 0:00:06.773 ***********
2025-06-22 12:00:04.132787 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:00:04.132798 | orchestrator |
2025-06-22 12:00:04.132808 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-06-22 12:00:04.132819 | orchestrator | Sunday 22 June 2025 11:57:50 +0000 (0:00:01.743) 0:00:08.517 ***********
2025-06-22 12:00:04.132830 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:00:04.132840 | orchestrator |
2025-06-22 12:00:04.132857 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-06-22 12:00:04.132873 | orchestrator | Sunday 22 June 2025 11:57:51 +0000 (0:00:00.415) 0:00:08.932 ***********
2025-06-22 12:00:04.132891 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:00:04.132909 | orchestrator |
2025-06-22 12:00:04.132926 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-06-22 12:00:04.132942 | orchestrator | Sunday 22 June 2025 11:57:51 +0000 (0:00:00.467) 0:00:09.400 ***********
2025-06-22 12:00:04.132959 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:00:04.132976 | orchestrator |
2025-06-22 12:00:04.132994 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-06-22 12:00:04.133012 | orchestrator | Sunday 22 June 2025 11:57:52 +0000 (0:00:00.431) 0:00:09.831 ***********
2025-06-22 12:00:04.133029 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:00:04.133046 | orchestrator |
2025-06-22 12:00:04.133067 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-22 12:00:04.133084 | orchestrator | Sunday 22 June 2025 11:57:53 +0000 (0:00:00.892) 0:00:10.724 ***********
2025-06-22 12:00:04.133103 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:00:04.133114 | orchestrator |
2025-06-22 12:00:04.133125 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-22 12:00:04.133149 | orchestrator | Sunday 22 June 2025 11:57:54 +0000 (0:00:01.055) 0:00:12.112 ***********
2025-06-22 12:00:04.133168 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:00:04.133186 | orchestrator |
2025-06-22 12:00:04.133204 | orchestrator | TASK [rabbitmq : List RabbitMQ policies]
*************************************** 2025-06-22 12:00:04.133229 | orchestrator | Sunday 22 June 2025 11:57:55 +0000 (0:00:01.055) 0:00:13.168 *********** 2025-06-22 12:00:04.133253 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:00:04.133287 | orchestrator | 2025-06-22 12:00:04.133307 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-22 12:00:04.133324 | orchestrator | Sunday 22 June 2025 11:57:55 +0000 (0:00:00.372) 0:00:13.541 *********** 2025-06-22 12:00:04.133342 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:00:04.133354 | orchestrator | 2025-06-22 12:00:04.133364 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-22 12:00:04.133375 | orchestrator | Sunday 22 June 2025 11:57:56 +0000 (0:00:00.376) 0:00:13.918 *********** 2025-06-22 12:00:04.133391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 
12:00:04.133409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.133429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.133471 | orchestrator | 2025-06-22 12:00:04.133483 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-22 12:00:04.133494 | orchestrator | Sunday 22 June 2025 11:57:57 +0000 (0:00:00.948) 0:00:14.866 *********** 2025-06-22 12:00:04.133525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.133538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.133557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.133569 | orchestrator | 2025-06-22 12:00:04.133579 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] 
*******************************
2025-06-22 12:00:04.133590 | orchestrator | Sunday 22 June 2025 11:57:59 +0000 (0:00:02.433) 0:00:17.299 ***********
2025-06-22 12:00:04.133601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-22 12:00:04.133612 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-22 12:00:04.133622 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-22 12:00:04.133639 | orchestrator |
2025-06-22 12:00:04.133658 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-22 12:00:04.133688 | orchestrator | Sunday 22 June 2025 11:58:01 +0000 (0:00:02.088) 0:00:19.387 ***********
2025-06-22 12:00:04.133708 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-22 12:00:04.133728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-22 12:00:04.133747 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-22 12:00:04.133766 | orchestrator |
2025-06-22 12:00:04.133784 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-22 12:00:04.133796 | orchestrator | Sunday 22 June 2025 11:58:04 +0000 (0:00:03.051) 0:00:22.439 ***********
2025-06-22 12:00:04.133806 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-22 12:00:04.133817 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-22 12:00:04.133828 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-22 12:00:04.133838 | orchestrator |
2025-06-22 12:00:04.133849 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-22 12:00:04.133860 | orchestrator | Sunday 22 June 2025 11:58:06 +0000 (0:00:01.673) 0:00:24.113 ***********
2025-06-22 12:00:04.133870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-22 12:00:04.133881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-22 12:00:04.133892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-22 12:00:04.133902 | orchestrator |
2025-06-22 12:00:04.133913 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-06-22 12:00:04.133923 | orchestrator | Sunday 22 June 2025 11:58:08 +0000 (0:00:02.427) 0:00:26.541 ***********
2025-06-22 12:00:04.133934 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-22 12:00:04.133945 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-22 12:00:04.133955 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-22 12:00:04.133966 | orchestrator |
2025-06-22 12:00:04.133976 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-06-22 12:00:04.133987 | orchestrator | Sunday 22 June 2025 11:58:10 +0000 (0:00:01.786) 0:00:28.327 ***********
2025-06-22 12:00:04.133998 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-22 12:00:04.134009 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-22 12:00:04.134077 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-22 12:00:04.134091 |
orchestrator | 2025-06-22 12:00:04.134102 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 12:00:04.134113 | orchestrator | Sunday 22 June 2025 11:58:12 +0000 (0:00:01.662) 0:00:29.990 *********** 2025-06-22 12:00:04.134124 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:00:04.134134 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:00:04.134145 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:00:04.134155 | orchestrator | 2025-06-22 12:00:04.134166 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-22 12:00:04.134177 | orchestrator | Sunday 22 June 2025 11:58:12 +0000 (0:00:00.376) 0:00:30.367 *********** 2025-06-22 12:00:04.134266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.134312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:00:04.134327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-22 12:00:04.134339 | orchestrator |
2025-06-22 12:00:04.134350 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-06-22 12:00:04.134361 | orchestrator | Sunday 22 June 2025 11:58:14 +0000 (0:00:02.033) 0:00:32.400 ***********
2025-06-22 12:00:04.134371 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:00:04.134382 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:00:04.134393 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:00:04.134403 | orchestrator |
2025-06-22 12:00:04.134414 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-06-22 12:00:04.134424 | orchestrator | Sunday 22 June 2025 11:58:15 +0000 (0:00:00.930) 0:00:33.330 ***********
2025-06-22 12:00:04.134505 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:00:04.134521 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:00:04.134532 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:00:04.134543 | orchestrator |
2025-06-22 12:00:04.134555 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-06-22 12:00:04.134566 | orchestrator | Sunday 22 June 2025 11:58:23 +0000 (0:00:08.245) 0:00:41.575 ***********
2025-06-22 12:00:04.134585 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:00:04.134596 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:00:04.134608 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:00:04.134619 | orchestrator |
2025-06-22 12:00:04.134630 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-22 12:00:04.134642 | orchestrator |
2025-06-22 12:00:04.134653 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-22 12:00:04.134664 | orchestrator | Sunday 22 June 2025 11:58:25 +0000 (0:00:01.235) 0:00:42.810 ***********
2025-06-22 12:00:04.134675 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:00:04.134693 | orchestrator |
2025-06-22 12:00:04.134712 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-22 12:00:04.134731 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.799) 0:00:43.610 ***********
2025-06-22 12:00:04.134751 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:00:04.134767 | orchestrator |
2025-06-22 12:00:04.134778 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-22 12:00:04.134789 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.366) 0:00:43.977 ***********
2025-06-22 12:00:04.134799 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:00:04.134810 | orchestrator |
2025-06-22 12:00:04.134821 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-22 12:00:04.134831 | orchestrator | Sunday 22 June 2025 11:58:33 +0000 (0:00:06.905) 0:00:50.883 ***********
2025-06-22 12:00:04.134842 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:00:04.134853 | orchestrator |
2025-06-22 12:00:04.134870 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-22 12:00:04.134881 | orchestrator |
2025-06-22 12:00:04.134892 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-22 12:00:04.134903 | orchestrator | Sunday 22 June 2025 11:59:22 +0000 (0:00:49.546) 0:01:40.429 ***********
2025-06-22 12:00:04.134913 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:00:04.134924 | orchestrator |
2025-06-22 12:00:04.134935 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-22 12:00:04.134945 | orchestrator | Sunday 22 June 2025 11:59:23 +0000 (0:00:00.668) 0:01:41.098 ***********
2025-06-22 12:00:04.134956 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:00:04.134967 | orchestrator |
2025-06-22 12:00:04.134978 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-22 12:00:04.134988 | orchestrator | Sunday 22 June 2025 11:59:23 +0000 (0:00:00.482) 0:01:41.581 ***********
2025-06-22 12:00:04.134997 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:00:04.135007 | orchestrator |
2025-06-22 12:00:04.135016 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-22 12:00:04.135026 | orchestrator | Sunday 22 June 2025 11:59:31 +0000 (0:00:07.105) 0:01:48.686 ***********
2025-06-22 12:00:04.135035 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:00:04.135044 | orchestrator |
2025-06-22 12:00:04.135054 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-22 12:00:04.135063 | orchestrator |
2025-06-22 12:00:04.135073 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-22 12:00:04.135089 | orchestrator | Sunday 22 June 2025 11:59:41 +0000 (0:00:10.265) 0:01:58.952 ***********
2025-06-22 12:00:04.135099 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:00:04.135109 | orchestrator |
2025-06-22 12:00:04.135118 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-22 12:00:04.135128 | orchestrator | Sunday 22 June 2025 11:59:41 +0000 (0:00:00.618) 0:01:59.570 ***********
2025-06-22 12:00:04.135137 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:00:04.135146 | orchestrator |
2025-06-22 12:00:04.135156 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-22 12:00:04.135165 | orchestrator | Sunday 22 June 2025 11:59:42 +0000 (0:00:00.306) 0:01:59.877 ***********
2025-06-22 12:00:04.135175 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:00:04.135194 | orchestrator |
2025-06-22 12:00:04.135204 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-22 12:00:04.135213 | orchestrator | Sunday 22 June 2025 11:59:44 +0000 (0:00:01.991) 0:02:01.868 ***********
2025-06-22 12:00:04.135223 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:00:04.135232 | orchestrator |
2025-06-22 12:00:04.135242 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-22 12:00:04.135251 | orchestrator |
2025-06-22 12:00:04.135261 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-22 12:00:04.135270 | orchestrator | Sunday 22 June 2025 12:00:00 +0000 (0:00:15.801) 0:02:17.670 ***********
2025-06-22 12:00:04.135280 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:00:04.135289 | orchestrator |
2025-06-22 12:00:04.135298 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-22 12:00:04.135308 | orchestrator | Sunday 22 June 2025 12:00:00 +0000 (0:00:00.659) 0:02:18.329 ***********
2025-06-22 12:00:04.135317 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-22 12:00:04.135326 | orchestrator | enable_outward_rabbitmq_True
2025-06-22 12:00:04.135336 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-22 12:00:04.135345 | orchestrator | outward_rabbitmq_restart
2025-06-22 12:00:04.135355 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:00:04.135364 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:00:04.135374 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:00:04.135383 | orchestrator |
2025-06-22 12:00:04.135392 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-22 12:00:04.135402 | orchestrator | skipping: no hosts matched 2025-06-22
12:00:04.135411 | orchestrator |
2025-06-22 12:00:04.135420 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-22 12:00:04.135430 | orchestrator | skipping: no hosts matched
2025-06-22 12:00:04.135472 | orchestrator |
2025-06-22 12:00:04.135484 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-22 12:00:04.135493 | orchestrator | skipping: no hosts matched
2025-06-22 12:00:04.135502 | orchestrator |
2025-06-22 12:00:04.135512 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:00:04.135522 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-22 12:00:04.135533 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-22 12:00:04.135543 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:00:04.135552 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:00:04.135562 | orchestrator |
2025-06-22 12:00:04.135572 | orchestrator |
2025-06-22 12:00:04.135581 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:00:04.135591 | orchestrator | Sunday 22 June 2025 12:00:03 +0000 (0:00:02.660) 0:02:20.990 ***********
2025-06-22 12:00:04.135601 | orchestrator | ===============================================================================
2025-06-22 12:00:04.135610 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 75.61s
2025-06-22 12:00:04.135619 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.00s
2025-06-22 12:00:04.135629 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.25s
2025-06-22 12:00:04.135643 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.32s
2025-06-22 12:00:04.135653 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.05s
2025-06-22 12:00:04.135662 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.66s
2025-06-22 12:00:04.135679 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.43s
2025-06-22 12:00:04.135688 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.43s
2025-06-22 12:00:04.135697 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.09s
2025-06-22 12:00:04.135707 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.09s
2025-06-22 12:00:04.135716 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.03s
2025-06-22 12:00:04.135732 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.79s
2025-06-22 12:00:04.135750 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.74s
2025-06-22 12:00:04.135767 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.67s
2025-06-22 12:00:04.135785 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.66s
2025-06-22 12:00:04.135811 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.39s
2025-06-22 12:00:04.135827 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.37s
2025-06-22 12:00:04.135837 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.24s
2025-06-22 12:00:04.135847 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s
2025-06-22
12:00:04.135857 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.16s
2025-06-22 12:00:04.135867 | orchestrator | 2025-06-22 12:00:04 | INFO  | Task aa42d9be-35aa-4ca7-8093-31d4706838c4 is in state SUCCESS
2025-06-22 12:00:04.135877 | orchestrator | 2025-06-22 12:00:04 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:04.135887 | orchestrator | 2025-06-22 12:00:04 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:04.135896 | orchestrator | 2025-06-22 12:00:04 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:07.163821 | orchestrator | 2025-06-22 12:00:07 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:07.164252 | orchestrator | 2025-06-22 12:00:07 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:07.165660 | orchestrator | 2025-06-22 12:00:07 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:07.165696 | orchestrator | 2025-06-22 12:00:07 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:10.195794 | orchestrator | 2025-06-22 12:00:10 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:10.196212 | orchestrator | 2025-06-22 12:00:10 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:10.198399 | orchestrator | 2025-06-22 12:00:10 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:10.198424 | orchestrator | 2025-06-22 12:00:10 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:13.241088 | orchestrator | 2025-06-22 12:00:13 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:13.242951 | orchestrator | 2025-06-22 12:00:13 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:13.244384 | orchestrator | 2025-06-22 12:00:13 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:13.244611 | orchestrator | 2025-06-22 12:00:13 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:16.293869 | orchestrator | 2025-06-22 12:00:16 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:16.293968 | orchestrator | 2025-06-22 12:00:16 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:16.296078 | orchestrator | 2025-06-22 12:00:16 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:16.296345 | orchestrator | 2025-06-22 12:00:16 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:19.348755 | orchestrator | 2025-06-22 12:00:19 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:19.354769 | orchestrator | 2025-06-22 12:00:19 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:19.358753 | orchestrator | 2025-06-22 12:00:19 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:19.358783 | orchestrator | 2025-06-22 12:00:19 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:22.406857 | orchestrator | 2025-06-22 12:00:22 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:22.406955 | orchestrator | 2025-06-22 12:00:22 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:22.406968 | orchestrator | 2025-06-22 12:00:22 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:22.406978 | orchestrator | 2025-06-22 12:00:22 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:25.440682 | orchestrator | 2025-06-22 12:00:25 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:25.441962 | orchestrator | 2025-06-22 12:00:25 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:25.444538 | orchestrator | 2025-06-22 12:00:25 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:25.444573 | orchestrator | 2025-06-22 12:00:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:28.495605 | orchestrator | 2025-06-22 12:00:28 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:28.497534 | orchestrator | 2025-06-22 12:00:28 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:28.499811 | orchestrator | 2025-06-22 12:00:28 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:28.499902 | orchestrator | 2025-06-22 12:00:28 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:31.541346 | orchestrator | 2025-06-22 12:00:31 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:31.542571 | orchestrator | 2025-06-22 12:00:31 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:31.544875 | orchestrator | 2025-06-22 12:00:31 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:31.544905 | orchestrator | 2025-06-22 12:00:31 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:34.596128 | orchestrator | 2025-06-22 12:00:34 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:00:34.598560 | orchestrator | 2025-06-22 12:00:34 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED
2025-06-22 12:00:34.600319 | orchestrator | 2025-06-22 12:00:34 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED
2025-06-22 12:00:34.600381 | orchestrator | 2025-06-22 12:00:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:00:37.654437 | orchestrator | 2025-06-22 12:00:37 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state
STARTED 2025-06-22 12:00:37.658358 | orchestrator | 2025-06-22 12:00:37 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:37.658890 | orchestrator | 2025-06-22 12:00:37 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:37.658951 | orchestrator | 2025-06-22 12:00:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:40.706673 | orchestrator | 2025-06-22 12:00:40 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:40.710163 | orchestrator | 2025-06-22 12:00:40 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:40.711066 | orchestrator | 2025-06-22 12:00:40 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:40.711094 | orchestrator | 2025-06-22 12:00:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:43.752567 | orchestrator | 2025-06-22 12:00:43 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:43.753819 | orchestrator | 2025-06-22 12:00:43 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:43.755578 | orchestrator | 2025-06-22 12:00:43 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:43.755629 | orchestrator | 2025-06-22 12:00:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:46.814885 | orchestrator | 2025-06-22 12:00:46 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:46.816689 | orchestrator | 2025-06-22 12:00:46 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:46.818517 | orchestrator | 2025-06-22 12:00:46 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:46.819131 | orchestrator | 2025-06-22 12:00:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:49.869574 | orchestrator | 
2025-06-22 12:00:49 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:49.871495 | orchestrator | 2025-06-22 12:00:49 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:49.873878 | orchestrator | 2025-06-22 12:00:49 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:49.873904 | orchestrator | 2025-06-22 12:00:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:52.919936 | orchestrator | 2025-06-22 12:00:52 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:52.921158 | orchestrator | 2025-06-22 12:00:52 | INFO  | Task c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state STARTED 2025-06-22 12:00:52.924069 | orchestrator | 2025-06-22 12:00:52 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:52.926279 | orchestrator | 2025-06-22 12:00:52 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:52.926482 | orchestrator | 2025-06-22 12:00:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:55.972333 | orchestrator | 2025-06-22 12:00:55 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:55.973986 | orchestrator | 2025-06-22 12:00:55 | INFO  | Task c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state STARTED 2025-06-22 12:00:55.974810 | orchestrator | 2025-06-22 12:00:55 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:55.978104 | orchestrator | 2025-06-22 12:00:55 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:55.978147 | orchestrator | 2025-06-22 12:00:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:00:59.004928 | orchestrator | 2025-06-22 12:00:59 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:00:59.006227 | orchestrator | 2025-06-22 12:00:59 | INFO  | 
Task c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state STARTED 2025-06-22 12:00:59.006797 | orchestrator | 2025-06-22 12:00:59 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:00:59.008484 | orchestrator | 2025-06-22 12:00:59 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:00:59.008508 | orchestrator | 2025-06-22 12:00:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:01:02.047651 | orchestrator | 2025-06-22 12:01:02 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:01:02.049642 | orchestrator | 2025-06-22 12:01:02 | INFO  | Task c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state STARTED 2025-06-22 12:01:02.051616 | orchestrator | 2025-06-22 12:01:02 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:01:02.053173 | orchestrator | 2025-06-22 12:01:02 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:01:02.053622 | orchestrator | 2025-06-22 12:01:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:01:05.111005 | orchestrator | 2025-06-22 12:01:05 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:01:05.112104 | orchestrator | 2025-06-22 12:01:05 | INFO  | Task c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state STARTED 2025-06-22 12:01:05.114999 | orchestrator | 2025-06-22 12:01:05 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:01:05.116489 | orchestrator | 2025-06-22 12:01:05 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:01:05.116786 | orchestrator | 2025-06-22 12:01:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:01:08.156192 | orchestrator | 2025-06-22 12:01:08 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:01:08.156290 | orchestrator | 2025-06-22 12:01:08 | INFO  | Task 
c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state STARTED 2025-06-22 12:01:08.156305 | orchestrator | 2025-06-22 12:01:08 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:01:08.156907 | orchestrator | 2025-06-22 12:01:08 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:01:08.156934 | orchestrator | 2025-06-22 12:01:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:01:11.207885 | orchestrator | 2025-06-22 12:01:11 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:01:11.208007 | orchestrator | 2025-06-22 12:01:11 | INFO  | Task c0ad1b6d-7cb3-4083-9fc0-7398488b5742 is in state SUCCESS 2025-06-22 12:01:11.209766 | orchestrator | 2025-06-22 12:01:11 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state STARTED 2025-06-22 12:01:11.211684 | orchestrator | 2025-06-22 12:01:11 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:01:11.211885 | orchestrator | 2025-06-22 12:01:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:01:14.260260 | orchestrator | 2025-06-22 12:01:14 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:01:14.264347 | orchestrator | 2025-06-22 12:01:14 | INFO  | Task 73fd2f0a-4ac4-4f03-a114-21ed7d77c11a is in state SUCCESS 2025-06-22 12:01:14.265797 | orchestrator | 2025-06-22 12:01:14.265835 | orchestrator | None 2025-06-22 12:01:14.265848 | orchestrator | 2025-06-22 12:01:14.265860 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:01:14.265872 | orchestrator | 2025-06-22 12:01:14.265981 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:01:14.266071 | orchestrator | Sunday 22 June 2025 11:58:37 +0000 (0:00:00.231) 0:00:00.231 *********** 2025-06-22 12:01:14.266084 | orchestrator | ok: [testbed-node-3] 2025-06-22 
12:01:14.266096 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:01:14.266139 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:01:14.266151 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.266161 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.266172 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.266183 | orchestrator | 2025-06-22 12:01:14.266194 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:01:14.266205 | orchestrator | Sunday 22 June 2025 11:58:37 +0000 (0:00:00.747) 0:00:00.979 *********** 2025-06-22 12:01:14.266216 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-22 12:01:14.266227 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-22 12:01:14.266238 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-22 12:01:14.266249 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-22 12:01:14.266260 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-22 12:01:14.266270 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-22 12:01:14.266281 | orchestrator | 2025-06-22 12:01:14.266292 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-22 12:01:14.266303 | orchestrator | 2025-06-22 12:01:14.266314 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-22 12:01:14.266325 | orchestrator | Sunday 22 June 2025 11:58:39 +0000 (0:00:01.497) 0:00:02.476 *********** 2025-06-22 12:01:14.266337 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:01:14.266349 | orchestrator | 2025-06-22 12:01:14.266360 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-22 12:01:14.266401 | 
orchestrator | Sunday 22 June 2025 11:58:41 +0000 (0:00:02.497) 0:00:04.974 *********** 2025-06-22 12:01:14.266418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-22 12:01:14.266488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266538 | orchestrator | 2025-06-22 12:01:14.266551 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-22 12:01:14.266564 | orchestrator | Sunday 22 June 2025 11:58:44 +0000 (0:00:02.456) 0:00:07.430 *********** 2025-06-22 12:01:14.266577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266655 | orchestrator | 2025-06-22 12:01:14.266667 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-22 12:01:14.266680 | orchestrator | Sunday 22 June 2025 11:58:46 +0000 (0:00:02.072) 0:00:09.503 *********** 2025-06-22 12:01:14.266706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266783 | orchestrator | 2025-06-22 12:01:14.266794 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-22 12:01:14.266805 | orchestrator | Sunday 22 June 2025 11:58:48 +0000 (0:00:02.035) 0:00:11.539 *********** 2025-06-22 12:01:14.266845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266934 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.266952 | orchestrator | 2025-06-22 12:01:14.266970 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-22 12:01:14.266988 | orchestrator | Sunday 22 June 2025 11:58:50 +0000 (0:00:01.734) 0:00:13.273 *********** 2025-06-22 12:01:14.267007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.267024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.267036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.267047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.267058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.267077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.267088 | orchestrator | 2025-06-22 12:01:14.267132 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-22 12:01:14.267144 | orchestrator | Sunday 22 June 2025 11:58:51 +0000 (0:00:01.503) 0:00:14.777 *********** 2025-06-22 12:01:14.267155 | orchestrator | changed: [testbed-node-4] 2025-06-22 
12:01:14.267167 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:01:14.267183 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:01:14.267194 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:01:14.267205 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:01:14.267216 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:01:14.267226 | orchestrator |
2025-06-22 12:01:14.267237 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-22 12:01:14.267248 | orchestrator | Sunday 22 June 2025 11:58:54 +0000 (0:00:03.203) 0:00:17.981 ***********
2025-06-22 12:01:14.267259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-22 12:01:14.267270 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-22 12:01:14.267281 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-22 12:01:14.267298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-22 12:01:14.267310 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-22 12:01:14.267321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-22 12:01:14.267331 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-22 12:01:14.267342 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-22 12:01:14.267353 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-22 12:01:14.267363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-22 12:01:14.267422 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-22 12:01:14.267437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-22 12:01:14.267448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-22 12:01:14.267459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-22 12:01:14.267472 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-22 12:01:14.267493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-22 12:01:14.267514 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-22 12:01:14.267538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-22 12:01:14.267550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-22 12:01:14.267561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-22 12:01:14.267572 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-22 12:01:14.267583 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-22 12:01:14.267594 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-22 12:01:14.267605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-22 12:01:14.267616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-22 12:01:14.267627 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-22 12:01:14.267638 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-22 12:01:14.267649 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-22 12:01:14.267660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-22 12:01:14.267671 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-22 12:01:14.267682 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-22 12:01:14.267693 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-22 12:01:14.267704 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-22 12:01:14.267715 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-22 12:01:14.267726 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-22 12:01:14.267743 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-22 12:01:14.267755 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-22 12:01:14.267766 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-22 12:01:14.267777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-22 12:01:14.267788 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-22 12:01:14.267806 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-22 12:01:14.267818 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-22 12:01:14.267829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-22 12:01:14.267840 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-22 12:01:14.267851 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-22 12:01:14.267862 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-22 12:01:14.267880 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-22 12:01:14.267891 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-22 12:01:14.267902 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-22 12:01:14.267913 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-22 12:01:14.267924 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-22 12:01:14.267935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-22 12:01:14.267946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-22 12:01:14.267957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-22 12:01:14.267967 | orchestrator |
2025-06-22 12:01:14.267979 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-22 12:01:14.267990 | orchestrator | Sunday 22 June 2025 11:59:15 +0000 (0:00:21.170) 0:00:39.151 ***********
2025-06-22 12:01:14.268001 | orchestrator |
2025-06-22 12:01:14.268012 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-22 12:01:14.268023 | orchestrator | Sunday 22 June 2025 11:59:16 +0000 (0:00:00.065) 0:00:39.217 ***********
2025-06-22 12:01:14.268033 | orchestrator |
2025-06-22 12:01:14.268044 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-22 12:01:14.268055 | orchestrator | Sunday 22 June 2025 11:59:16 +0000 (0:00:00.066) 0:00:39.284 ***********
2025-06-22 12:01:14.268066 | orchestrator |
2025-06-22 12:01:14.268077 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-22 12:01:14.268088 | orchestrator | Sunday 22 June 2025 11:59:16 +0000 (0:00:00.066) 0:00:39.350 ***********
2025-06-22 12:01:14.268099 | orchestrator |
2025-06-22 12:01:14.268117 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-22 12:01:14.268136 | orchestrator | Sunday 22 June 2025 11:59:16 +0000
(0:00:00.066) 0:00:39.417 ***********
2025-06-22 12:01:14.268155 | orchestrator |
2025-06-22 12:01:14.268167 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-22 12:01:14.268177 | orchestrator | Sunday 22 June 2025 11:59:16 +0000 (0:00:00.068) 0:00:39.486 ***********
2025-06-22 12:01:14.268188 | orchestrator |
2025-06-22 12:01:14.268199 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-22 12:01:14.268210 | orchestrator | Sunday 22 June 2025 11:59:16 +0000 (0:00:00.063) 0:00:39.549 ***********
2025-06-22 12:01:14.268221 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:01:14.268232 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:01:14.268243 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:01:14.268254 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.268264 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.268275 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.268286 | orchestrator |
2025-06-22 12:01:14.268297 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-22 12:01:14.268308 | orchestrator | Sunday 22 June 2025 11:59:18 +0000 (0:00:02.586) 0:00:42.136 ***********
2025-06-22 12:01:14.268319 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:01:14.268330 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:01:14.268341 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:01:14.268351 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:01:14.268362 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:01:14.268421 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:01:14.268453 | orchestrator |
2025-06-22 12:01:14.268469 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-22 12:01:14.268480 | orchestrator |
2025-06-22 12:01:14.268491 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-22 12:01:14.268502 | orchestrator | Sunday 22 June 2025 11:59:53 +0000 (0:00:34.809) 0:01:16.945 ***********
2025-06-22 12:01:14.268513 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:01:14.268524 | orchestrator |
2025-06-22 12:01:14.268535 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-22 12:01:14.268545 | orchestrator | Sunday 22 June 2025 11:59:54 +0000 (0:00:00.684) 0:01:17.630 ***********
2025-06-22 12:01:14.268557 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:01:14.268567 | orchestrator |
2025-06-22 12:01:14.268585 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-22 12:01:14.268597 | orchestrator | Sunday 22 June 2025 11:59:55 +0000 (0:00:01.032) 0:01:18.663 ***********
2025-06-22 12:01:14.268608 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.268619 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.268629 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.268640 | orchestrator |
2025-06-22 12:01:14.268651 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-22 12:01:14.268662 | orchestrator | Sunday 22 June 2025 11:59:56 +0000 (0:00:00.888) 0:01:19.551 ***********
2025-06-22 12:01:14.268672 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.268683 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.268694 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.268704 | orchestrator |
2025-06-22 12:01:14.268715 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-22 12:01:14.268726 | orchestrator | Sunday 22 June 2025 11:59:56 +0000 (0:00:00.315) 0:01:19.866 ***********
2025-06-22 12:01:14.268737 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.268748 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.268758 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.268769 | orchestrator |
2025-06-22 12:01:14.268780 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-22 12:01:14.268790 | orchestrator | Sunday 22 June 2025 11:59:57 +0000 (0:00:00.353) 0:01:20.219 ***********
2025-06-22 12:01:14.268801 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.268812 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.268823 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.268833 | orchestrator |
2025-06-22 12:01:14.268844 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-22 12:01:14.268855 | orchestrator | Sunday 22 June 2025 11:59:57 +0000 (0:00:00.521) 0:01:20.741 ***********
2025-06-22 12:01:14.268866 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.268877 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.268887 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.268898 | orchestrator |
2025-06-22 12:01:14.268909 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-22 12:01:14.268920 | orchestrator | Sunday 22 June 2025 11:59:57 +0000 (0:00:00.294) 0:01:21.065 ***********
2025-06-22 12:01:14.268931 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.268941 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.268952 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.268963 | orchestrator |
2025-06-22 12:01:14.268974 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-22 12:01:14.268984 | orchestrator | Sunday 22 June 2025 11:59:58 +0000 (0:00:00.294) 0:01:21.359 ***********
2025-06-22 12:01:14.268995 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269005 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269016 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269027 | orchestrator |
2025-06-22 12:01:14.269038 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-22 12:01:14.269055 | orchestrator | Sunday 22 June 2025 11:59:58 +0000 (0:00:00.294) 0:01:21.654 ***********
2025-06-22 12:01:14.269066 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269077 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269087 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269098 | orchestrator |
2025-06-22 12:01:14.269109 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-22 12:01:14.269119 | orchestrator | Sunday 22 June 2025 11:59:58 +0000 (0:00:00.474) 0:01:22.129 ***********
2025-06-22 12:01:14.269130 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269141 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269151 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269162 | orchestrator |
2025-06-22 12:01:14.269172 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-22 12:01:14.269183 | orchestrator | Sunday 22 June 2025 11:59:59 +0000 (0:00:00.300) 0:01:22.429 ***********
2025-06-22 12:01:14.269194 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269204 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269215 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269226 | orchestrator |
2025-06-22 12:01:14.269237 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-22 12:01:14.269247 | orchestrator | Sunday 22 June 2025 11:59:59 +0000 (0:00:00.288) 0:01:22.717 ***********
2025-06-22 12:01:14.269258 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269269 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269279 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269290 | orchestrator |
2025-06-22 12:01:14.269301 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-22 12:01:14.269311 | orchestrator | Sunday 22 June 2025 11:59:59 +0000 (0:00:00.308) 0:01:23.026 ***********
2025-06-22 12:01:14.269322 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269332 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269343 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269354 | orchestrator |
2025-06-22 12:01:14.269365 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-06-22 12:01:14.269396 | orchestrator | Sunday 22 June 2025 12:00:00 +0000 (0:00:00.583) 0:01:23.610 ***********
2025-06-22 12:01:14.269416 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269436 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269463 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269476 | orchestrator |
2025-06-22 12:01:14.269487 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-06-22 12:01:14.269497 | orchestrator | Sunday 22 June 2025 12:00:00 +0000 (0:00:00.300) 0:01:23.910 ***********
2025-06-22 12:01:14.269508 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269519 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269529 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269540 | orchestrator |
2025-06-22 12:01:14.269550 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-06-22 12:01:14.269561 | orchestrator | Sunday 22 June 2025 12:00:01 +0000 (0:00:00.317) 0:01:24.228 ***********
2025-06-22 12:01:14.269572 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269583 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269593 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269604 | orchestrator |
2025-06-22 12:01:14.269621 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-06-22 12:01:14.269633 | orchestrator | Sunday 22 June 2025 12:00:01 +0000 (0:00:00.305) 0:01:24.533 ***********
2025-06-22 12:01:14.269643 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269654 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269665 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269675 | orchestrator |
2025-06-22 12:01:14.269686 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-06-22 12:01:14.269704 | orchestrator | Sunday 22 June 2025 12:00:01 +0000 (0:00:00.502) 0:01:25.035 ***********
2025-06-22 12:01:14.269715 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269726 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269736 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269747 | orchestrator |
2025-06-22 12:01:14.269757 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-22 12:01:14.269768 | orchestrator | Sunday 22 June 2025 12:00:02 +0000 (0:00:00.318) 0:01:25.354 ***********
2025-06-22 12:01:14.269779 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:01:14.269790 | orchestrator |
2025-06-22 12:01:14.269801 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-06-22 12:01:14.269811 | orchestrator | Sunday 22 June 2025 12:00:02 +0000 (0:00:00.578) 0:01:25.933 ***********
2025-06-22 12:01:14.269822 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.269833 | orchestrator | ok:
[testbed-node-1]
2025-06-22 12:01:14.269844 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.269854 | orchestrator |
2025-06-22 12:01:14.269865 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-06-22 12:01:14.269876 | orchestrator | Sunday 22 June 2025 12:00:03 +0000 (0:00:01.108) 0:01:27.041 ***********
2025-06-22 12:01:14.269887 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:01:14.269897 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:01:14.269908 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:01:14.269918 | orchestrator |
2025-06-22 12:01:14.269929 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-06-22 12:01:14.269940 | orchestrator | Sunday 22 June 2025 12:00:04 +0000 (0:00:00.474) 0:01:27.515 ***********
2025-06-22 12:01:14.269951 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.269961 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.269972 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.269983 | orchestrator |
2025-06-22 12:01:14.269993 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-06-22 12:01:14.270004 | orchestrator | Sunday 22 June 2025 12:00:04 +0000 (0:00:00.417) 0:01:27.933 ***********
2025-06-22 12:01:14.270057 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.270071 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.270082 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.270092 | orchestrator |
2025-06-22 12:01:14.270103 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-06-22 12:01:14.270114 | orchestrator | Sunday 22 June 2025 12:00:05 +0000 (0:00:00.430) 0:01:28.363 ***********
2025-06-22 12:01:14.270124 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.270135 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.270146 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.270157 | orchestrator |
2025-06-22 12:01:14.270168 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-06-22 12:01:14.270178 | orchestrator | Sunday 22 June 2025 12:00:05 +0000 (0:00:00.549) 0:01:28.912 ***********
2025-06-22 12:01:14.270189 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.270200 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.270210 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.270221 | orchestrator |
2025-06-22 12:01:14.270232 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-06-22 12:01:14.270242 | orchestrator | Sunday 22 June 2025 12:00:06 +0000 (0:00:00.300) 0:01:29.213 ***********
2025-06-22 12:01:14.270253 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.270264 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.270274 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.270285 | orchestrator |
2025-06-22 12:01:14.270296 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-06-22 12:01:14.270306 | orchestrator | Sunday 22 June 2025 12:00:06 +0000 (0:00:00.295) 0:01:29.509 ***********
2025-06-22 12:01:14.270324 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:01:14.270335 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:01:14.270345 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:01:14.270356 | orchestrator |
2025-06-22 12:01:14.270367 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-06-22 12:01:14.270407 | orchestrator | Sunday 22 June 2025 12:00:06 +0000 (0:00:00.325) 0:01:29.834 ***********
2025-06-22 12:01:14.270427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270863 | orchestrator |
2025-06-22 12:01:14.270876 | orchestrator | TASK [ovn-db : Copying over
config.json files for services] ********************
2025-06-22 12:01:14.270888 | orchestrator | Sunday 22 June 2025 12:00:08 +0000 (0:00:01.358) 0:01:31.193 ***********
2025-06-22 12:01:14.270899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.270998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271070 | orchestrator |
2025-06-22 12:01:14.271082 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-06-22 12:01:14.271093 | orchestrator | Sunday 22 June 2025 12:00:11 +0000 (0:00:03.903) 0:01:35.096 ***********
2025-06-22 12:01:14.271104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:01:14.271231 | orchestrator |
2025-06-22 12:01:14.271243 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-22 12:01:14.271256 | orchestrator | Sunday 22 June 2025 12:00:13 +0000 (0:00:02.036) 0:01:37.132 ***********
2025-06-22 12:01:14.271269 | orchestrator |
2025-06-22 12:01:14.271282 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-22 12:01:14.271294 | orchestrator | Sunday 22 June 2025 12:00:14 +0000 (0:00:00.069) 0:01:37.202 ***********
2025-06-22 12:01:14.271306 | orchestrator |
2025-06-22 12:01:14.271318 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-22 12:01:14.271330 | orchestrator | Sunday 22 June 2025 12:00:14 +0000 (0:00:00.079) 0:01:37.281 ***********
2025-06-22 12:01:14.271342 | orchestrator |
2025-06-22 12:01:14.271355 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-22 12:01:14.271367 | orchestrator | Sunday 22 June 2025 12:00:14 +0000 (0:00:00.067) 0:01:37.348 ***********
2025-06-22 12:01:14.271412 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:01:14.271433 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:01:14.271453 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:01:14.271466 | orchestrator |
2025-06-22 12:01:14.271477 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-22 12:01:14.271488 | orchestrator | Sunday 22 June 2025 12:00:21 +0000 (0:00:07.531) 0:01:44.880 *********** 2025-06-22 12:01:14.271498 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:01:14.271509 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:01:14.271520 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:01:14.271530 | orchestrator | 2025-06-22 12:01:14.271541 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 12:01:14.271552 | orchestrator | Sunday 22 June 2025 12:00:24 +0000 (0:00:02.957) 0:01:47.838 *********** 2025-06-22 12:01:14.271568 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:01:14.271580 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:01:14.271591 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:01:14.271602 | orchestrator | 2025-06-22 12:01:14.271612 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 12:01:14.271623 | orchestrator | Sunday 22 June 2025 12:00:32 +0000 (0:00:07.716) 0:01:55.555 *********** 2025-06-22 12:01:14.271634 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:01:14.271645 | orchestrator | 2025-06-22 12:01:14.271656 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-22 12:01:14.271667 | orchestrator | Sunday 22 June 2025 12:00:32 +0000 (0:00:00.136) 0:01:55.691 *********** 2025-06-22 12:01:14.271677 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.271689 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.271700 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.271710 | orchestrator | 2025-06-22 12:01:14.271729 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 12:01:14.271740 | orchestrator | Sunday 22 June 2025 12:00:33 +0000 (0:00:00.824) 0:01:56.516 *********** 2025-06-22 12:01:14.271751 | orchestrator | 
skipping: [testbed-node-1] 2025-06-22 12:01:14.271761 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:01:14.271772 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:01:14.271783 | orchestrator | 2025-06-22 12:01:14.271794 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 12:01:14.271805 | orchestrator | Sunday 22 June 2025 12:00:34 +0000 (0:00:00.834) 0:01:57.351 *********** 2025-06-22 12:01:14.271815 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.271833 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.271844 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.271855 | orchestrator | 2025-06-22 12:01:14.271866 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 12:01:14.271877 | orchestrator | Sunday 22 June 2025 12:00:34 +0000 (0:00:00.822) 0:01:58.173 *********** 2025-06-22 12:01:14.271887 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:01:14.271898 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:01:14.271909 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:01:14.271920 | orchestrator | 2025-06-22 12:01:14.271930 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-22 12:01:14.271941 | orchestrator | Sunday 22 June 2025 12:00:35 +0000 (0:00:00.618) 0:01:58.791 *********** 2025-06-22 12:01:14.271952 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.271963 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.271973 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.271987 | orchestrator | 2025-06-22 12:01:14.272005 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 12:01:14.272023 | orchestrator | Sunday 22 June 2025 12:00:36 +0000 (0:00:00.889) 0:01:59.681 *********** 2025-06-22 12:01:14.272041 | orchestrator | ok: [testbed-node-0] 2025-06-22 
12:01:14.272059 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.272072 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.272082 | orchestrator | 2025-06-22 12:01:14.272093 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-22 12:01:14.272104 | orchestrator | Sunday 22 June 2025 12:00:38 +0000 (0:00:01.912) 0:02:01.593 *********** 2025-06-22 12:01:14.272115 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.272125 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.272136 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.272147 | orchestrator | 2025-06-22 12:01:14.272158 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 12:01:14.272169 | orchestrator | Sunday 22 June 2025 12:00:38 +0000 (0:00:00.331) 0:02:01.924 *********** 2025-06-22 12:01:14.272180 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272192 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272203 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272232 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272250 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272281 | orchestrator | 
ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272292 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272303 | orchestrator | 2025-06-22 12:01:14.272314 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 12:01:14.272325 | orchestrator | Sunday 22 June 2025 12:00:40 +0000 (0:00:01.440) 0:02:03.365 *********** 2025-06-22 12:01:14.272337 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272348 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272359 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272370 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272497 | orchestrator | 2025-06-22 12:01:14.272508 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 12:01:14.272519 | orchestrator | Sunday 22 June 2025 12:00:44 +0000 (0:00:04.388) 0:02:07.753 *********** 2025-06-22 12:01:14.272530 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272572 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-22 12:01:14.272629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:01:14.272670 | orchestrator | 2025-06-22 12:01:14.272681 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 12:01:14.272692 | orchestrator | Sunday 22 June 2025 12:00:47 +0000 (0:00:03.118) 0:02:10.872 *********** 2025-06-22 12:01:14.272703 | orchestrator | 2025-06-22 12:01:14.272713 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 12:01:14.272724 | orchestrator | Sunday 22 June 2025 12:00:47 +0000 (0:00:00.068) 0:02:10.941 *********** 2025-06-22 12:01:14.272735 | orchestrator | 2025-06-22 
12:01:14.272746 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 12:01:14.272757 | orchestrator | Sunday 22 June 2025 12:00:47 +0000 (0:00:00.077) 0:02:11.019 *********** 2025-06-22 12:01:14.272768 | orchestrator | 2025-06-22 12:01:14.272778 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 12:01:14.272789 | orchestrator | Sunday 22 June 2025 12:00:47 +0000 (0:00:00.069) 0:02:11.089 *********** 2025-06-22 12:01:14.272800 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:01:14.272810 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:01:14.272821 | orchestrator | 2025-06-22 12:01:14.272832 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 12:01:14.272843 | orchestrator | Sunday 22 June 2025 12:00:54 +0000 (0:00:06.479) 0:02:17.568 *********** 2025-06-22 12:01:14.272853 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:01:14.272864 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:01:14.272875 | orchestrator | 2025-06-22 12:01:14.272886 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 12:01:14.272896 | orchestrator | Sunday 22 June 2025 12:01:00 +0000 (0:00:06.483) 0:02:24.052 *********** 2025-06-22 12:01:14.272907 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:01:14.272918 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:01:14.272928 | orchestrator | 2025-06-22 12:01:14.272939 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 12:01:14.272956 | orchestrator | Sunday 22 June 2025 12:01:06 +0000 (0:00:06.039) 0:02:30.091 *********** 2025-06-22 12:01:14.272968 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:01:14.272978 | orchestrator | 2025-06-22 12:01:14.272993 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster 
leader] ****************************** 2025-06-22 12:01:14.273013 | orchestrator | Sunday 22 June 2025 12:01:07 +0000 (0:00:00.137) 0:02:30.228 *********** 2025-06-22 12:01:14.273033 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.273049 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.273060 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.273071 | orchestrator | 2025-06-22 12:01:14.273083 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 12:01:14.273093 | orchestrator | Sunday 22 June 2025 12:01:08 +0000 (0:00:01.058) 0:02:31.287 *********** 2025-06-22 12:01:14.273104 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:01:14.273115 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:01:14.273126 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:01:14.273137 | orchestrator | 2025-06-22 12:01:14.273148 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 12:01:14.273159 | orchestrator | Sunday 22 June 2025 12:01:08 +0000 (0:00:00.605) 0:02:31.892 *********** 2025-06-22 12:01:14.273170 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.273181 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.273192 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.273203 | orchestrator | 2025-06-22 12:01:14.273214 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 12:01:14.273225 | orchestrator | Sunday 22 June 2025 12:01:09 +0000 (0:00:00.756) 0:02:32.649 *********** 2025-06-22 12:01:14.273236 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:01:14.273247 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:01:14.273258 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:01:14.273269 | orchestrator | 2025-06-22 12:01:14.273279 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 
2025-06-22 12:01:14.273291 | orchestrator | Sunday 22 June 2025 12:01:10 +0000 (0:00:00.650) 0:02:33.300 *********** 2025-06-22 12:01:14.273301 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.273312 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.273323 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.273334 | orchestrator | 2025-06-22 12:01:14.273356 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 12:01:14.273367 | orchestrator | Sunday 22 June 2025 12:01:11 +0000 (0:00:01.208) 0:02:34.508 *********** 2025-06-22 12:01:14.273437 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:01:14.273450 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:01:14.273461 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:01:14.273471 | orchestrator | 2025-06-22 12:01:14.273482 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:01:14.273494 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 12:01:14.273506 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-22 12:01:14.273525 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-22 12:01:14.273536 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:01:14.273548 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:01:14.273559 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:01:14.273578 | orchestrator | 2025-06-22 12:01:14.273589 | orchestrator | 2025-06-22 12:01:14.273600 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 
12:01:14.273611 | orchestrator | Sunday 22 June 2025 12:01:12 +0000 (0:00:00.911) 0:02:35.420 *********** 2025-06-22 12:01:14.273622 | orchestrator | =============================================================================== 2025-06-22 12:01:14.273633 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.81s 2025-06-22 12:01:14.273644 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.17s 2025-06-22 12:01:14.273655 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.01s 2025-06-22 12:01:14.273665 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.76s 2025-06-22 12:01:14.273676 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.44s 2025-06-22 12:01:14.273687 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.39s 2025-06-22 12:01:14.273698 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s 2025-06-22 12:01:14.273708 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.20s 2025-06-22 12:01:14.273719 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.12s 2025-06-22 12:01:14.273730 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.59s 2025-06-22 12:01:14.273740 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.50s 2025-06-22 12:01:14.273751 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.46s 2025-06-22 12:01:14.273762 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.07s 2025-06-22 12:01:14.273773 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.04s 2025-06-22 12:01:14.273784 | 
orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.04s 2025-06-22 12:01:14.273794 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.91s 2025-06-22 12:01:14.273805 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.73s 2025-06-22 12:01:14.273816 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.50s 2025-06-22 12:01:14.273827 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.50s 2025-06-22 12:01:14.273838 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.44s 2025-06-22 12:01:14.273849 | orchestrator | 2025-06-22 12:01:14 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:01:14.273860 | orchestrator | 2025-06-22 12:01:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:01:17.312681 | orchestrator | 2025-06-22 12:01:17 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:01:17.314076 | orchestrator | 2025-06-22 12:01:17 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:01:17.314321 | orchestrator | 2025-06-22 12:01:17 | INFO  | Wait 1 second(s) until the next check
12:02:27.428598 | orchestrator | 2025-06-22 12:02:27 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:27.428650 | orchestrator | 2025-06-22 12:02:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:30.473789 | orchestrator | 2025-06-22 12:02:30 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:30.473892 | orchestrator | 2025-06-22 12:02:30 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:30.473907 | orchestrator | 2025-06-22 12:02:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:33.521233 | orchestrator | 2025-06-22 12:02:33 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:33.524563 | orchestrator | 2025-06-22 12:02:33 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:33.524612 | orchestrator | 2025-06-22 12:02:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:36.574507 | orchestrator | 2025-06-22 12:02:36 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:36.576648 | orchestrator | 2025-06-22 12:02:36 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:36.576723 | orchestrator | 2025-06-22 12:02:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:39.622937 | orchestrator | 2025-06-22 12:02:39 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:39.625006 | orchestrator | 2025-06-22 12:02:39 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:39.625178 | orchestrator | 2025-06-22 12:02:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:42.684412 | orchestrator | 2025-06-22 12:02:42 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:42.684525 | orchestrator | 2025-06-22 12:02:42 | INFO  | Task 
35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:42.685149 | orchestrator | 2025-06-22 12:02:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:45.733960 | orchestrator | 2025-06-22 12:02:45 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:45.734782 | orchestrator | 2025-06-22 12:02:45 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:45.734907 | orchestrator | 2025-06-22 12:02:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:48.777224 | orchestrator | 2025-06-22 12:02:48 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:48.777850 | orchestrator | 2025-06-22 12:02:48 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:48.777883 | orchestrator | 2025-06-22 12:02:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:51.828012 | orchestrator | 2025-06-22 12:02:51 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:51.829429 | orchestrator | 2025-06-22 12:02:51 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:51.829460 | orchestrator | 2025-06-22 12:02:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:54.887393 | orchestrator | 2025-06-22 12:02:54 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:54.897689 | orchestrator | 2025-06-22 12:02:54 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:02:54.898228 | orchestrator | 2025-06-22 12:02:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:02:57.950125 | orchestrator | 2025-06-22 12:02:57 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:02:57.951019 | orchestrator | 2025-06-22 12:02:57 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 
12:02:57.951052 | orchestrator | 2025-06-22 12:02:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:01.005999 | orchestrator | 2025-06-22 12:03:01 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:01.006270 | orchestrator | 2025-06-22 12:03:01 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:01.006295 | orchestrator | 2025-06-22 12:03:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:04.064104 | orchestrator | 2025-06-22 12:03:04 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:04.064533 | orchestrator | 2025-06-22 12:03:04 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:04.064565 | orchestrator | 2025-06-22 12:03:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:07.107432 | orchestrator | 2025-06-22 12:03:07 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:07.107830 | orchestrator | 2025-06-22 12:03:07 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:07.107862 | orchestrator | 2025-06-22 12:03:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:10.150701 | orchestrator | 2025-06-22 12:03:10 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:10.150803 | orchestrator | 2025-06-22 12:03:10 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:10.150818 | orchestrator | 2025-06-22 12:03:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:13.200015 | orchestrator | 2025-06-22 12:03:13 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:13.200184 | orchestrator | 2025-06-22 12:03:13 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:13.200204 | orchestrator | 2025-06-22 12:03:13 | INFO  | Wait 1 second(s) 
until the next check 2025-06-22 12:03:16.250414 | orchestrator | 2025-06-22 12:03:16 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:16.252261 | orchestrator | 2025-06-22 12:03:16 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:16.252606 | orchestrator | 2025-06-22 12:03:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:19.291004 | orchestrator | 2025-06-22 12:03:19 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:19.294517 | orchestrator | 2025-06-22 12:03:19 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:19.294553 | orchestrator | 2025-06-22 12:03:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:22.360219 | orchestrator | 2025-06-22 12:03:22 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:22.363953 | orchestrator | 2025-06-22 12:03:22 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:22.364095 | orchestrator | 2025-06-22 12:03:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:25.412612 | orchestrator | 2025-06-22 12:03:25 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:25.412919 | orchestrator | 2025-06-22 12:03:25 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:25.413418 | orchestrator | 2025-06-22 12:03:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:28.466833 | orchestrator | 2025-06-22 12:03:28 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:28.468532 | orchestrator | 2025-06-22 12:03:28 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:28.468596 | orchestrator | 2025-06-22 12:03:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:31.505492 | orchestrator | 2025-06-22 
12:03:31 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:31.506001 | orchestrator | 2025-06-22 12:03:31 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:31.506074 | orchestrator | 2025-06-22 12:03:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:34.546552 | orchestrator | 2025-06-22 12:03:34 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:34.546678 | orchestrator | 2025-06-22 12:03:34 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:34.549394 | orchestrator | 2025-06-22 12:03:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:37.600305 | orchestrator | 2025-06-22 12:03:37 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:37.602651 | orchestrator | 2025-06-22 12:03:37 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:37.602716 | orchestrator | 2025-06-22 12:03:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:40.643039 | orchestrator | 2025-06-22 12:03:40 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:40.644852 | orchestrator | 2025-06-22 12:03:40 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:40.645476 | orchestrator | 2025-06-22 12:03:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:43.689618 | orchestrator | 2025-06-22 12:03:43 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:43.690226 | orchestrator | 2025-06-22 12:03:43 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:43.690259 | orchestrator | 2025-06-22 12:03:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:46.747589 | orchestrator | 2025-06-22 12:03:46 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state 
STARTED 2025-06-22 12:03:46.748970 | orchestrator | 2025-06-22 12:03:46 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:46.749007 | orchestrator | 2025-06-22 12:03:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:49.799489 | orchestrator | 2025-06-22 12:03:49 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:49.800544 | orchestrator | 2025-06-22 12:03:49 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:49.800577 | orchestrator | 2025-06-22 12:03:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:52.847800 | orchestrator | 2025-06-22 12:03:52 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:52.847928 | orchestrator | 2025-06-22 12:03:52 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:52.847944 | orchestrator | 2025-06-22 12:03:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:55.891900 | orchestrator | 2025-06-22 12:03:55 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:55.892005 | orchestrator | 2025-06-22 12:03:55 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state STARTED 2025-06-22 12:03:55.892022 | orchestrator | 2025-06-22 12:03:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:03:58.934798 | orchestrator | 2025-06-22 12:03:58 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:03:58.945561 | orchestrator | 2025-06-22 12:03:58 | INFO  | Task 35209b30-7b93-4c2b-8ba6-9257d9e12129 is in state SUCCESS 2025-06-22 12:03:58.948182 | orchestrator | 2025-06-22 12:03:58.948227 | orchestrator | 2025-06-22 12:03:58.948264 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:03:58.948278 | orchestrator | 2025-06-22 12:03:58.948289 | orchestrator | TASK [Group hosts 
based on Kolla action] *************************************** 2025-06-22 12:03:58.948301 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.262) 0:00:00.262 *********** 2025-06-22 12:03:58.948312 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:03:58.948324 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:03:58.948335 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:03:58.948402 | orchestrator | 2025-06-22 12:03:58.948414 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:03:58.948470 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.347) 0:00:00.610 *********** 2025-06-22 12:03:58.948483 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-22 12:03:58.948494 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-22 12:03:58.948506 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-22 12:03:58.948541 | orchestrator | 2025-06-22 12:03:58.948553 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-22 12:03:58.948634 | orchestrator | 2025-06-22 12:03:58.948645 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 12:03:58.948656 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.419) 0:00:01.030 *********** 2025-06-22 12:03:58.948696 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.948709 | orchestrator | 2025-06-22 12:03:58.948721 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-22 12:03:58.948733 | orchestrator | Sunday 22 June 2025 11:57:28 +0000 (0:00:00.686) 0:00:01.716 *********** 2025-06-22 12:03:58.948744 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:03:58.948755 | orchestrator | ok: [testbed-node-0] 2025-06-22 
12:03:58.948766 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:03:58.948777 | orchestrator | 2025-06-22 12:03:58.948789 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 12:03:58.948800 | orchestrator | Sunday 22 June 2025 11:57:29 +0000 (0:00:00.795) 0:00:02.512 *********** 2025-06-22 12:03:58.948811 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.948822 | orchestrator | 2025-06-22 12:03:58.948833 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-22 12:03:58.948844 | orchestrator | Sunday 22 June 2025 11:57:30 +0000 (0:00:00.890) 0:00:03.402 *********** 2025-06-22 12:03:58.948855 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:03:58.948866 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:03:58.948877 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:03:58.948888 | orchestrator | 2025-06-22 12:03:58.948899 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-22 12:03:58.948934 | orchestrator | Sunday 22 June 2025 11:57:30 +0000 (0:00:00.655) 0:00:04.058 *********** 2025-06-22 12:03:58.949018 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 12:03:58.949031 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 12:03:58.949042 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 12:03:58.949106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 12:03:58.949118 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 12:03:58.949130 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 
'KOLLA_UNSET'}) 2025-06-22 12:03:58.949141 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 12:03:58.949152 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 12:03:58.949162 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 12:03:58.949173 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 12:03:58.949183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 12:03:58.949194 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 12:03:58.949205 | orchestrator | 2025-06-22 12:03:58.949215 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 12:03:58.949241 | orchestrator | Sunday 22 June 2025 11:57:34 +0000 (0:00:03.605) 0:00:07.664 *********** 2025-06-22 12:03:58.949253 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 12:03:58.949264 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 12:03:58.949275 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 12:03:58.949285 | orchestrator | 2025-06-22 12:03:58.949296 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 12:03:58.949307 | orchestrator | Sunday 22 June 2025 11:57:35 +0000 (0:00:01.023) 0:00:08.688 *********** 2025-06-22 12:03:58.949318 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 12:03:58.949329 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 12:03:58.949339 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 12:03:58.949350 | orchestrator | 2025-06-22 12:03:58.949387 | orchestrator | TASK [module-load : Drop 
module persistence] *********************************** 2025-06-22 12:03:58.949399 | orchestrator | Sunday 22 June 2025 11:57:37 +0000 (0:00:02.190) 0:00:10.878 *********** 2025-06-22 12:03:58.949410 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-22 12:03:58.949420 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.949591 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-22 12:03:58.949603 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.949614 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-22 12:03:58.949625 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.949636 | orchestrator | 2025-06-22 12:03:58.949646 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-22 12:03:58.949657 | orchestrator | Sunday 22 June 2025 11:57:38 +0000 (0:00:01.060) 0:00:11.938 *********** 2025-06-22 12:03:58.949672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 12:03:58.949699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 12:03:58.949741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 12:03:58.949755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 12:03:58.949774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 12:03:58.949794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 12:03:58.949807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 12:03:58.949820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 12:03:58.949870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 12:03:58.949884 | orchestrator | 2025-06-22 12:03:58.949895 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-22 12:03:58.949907 | orchestrator | Sunday 22 June 2025 11:57:41 +0000 (0:00:02.504) 0:00:14.443 *********** 2025-06-22 12:03:58.949917 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.949928 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.949939 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.949950 | orchestrator | 2025-06-22 12:03:58.949961 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-22 12:03:58.949972 | orchestrator | Sunday 22 June 2025 11:57:42 +0000 (0:00:01.044) 0:00:15.489 *********** 2025-06-22 12:03:58.949983 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-22 12:03:58.949993 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-22 12:03:58.950004 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-22 12:03:58.950089 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-22 12:03:58.950184 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-22 
12:03:58.950196 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-22 12:03:58.950207 | orchestrator |
2025-06-22 12:03:58.950218 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-22 12:03:58.950229 | orchestrator | Sunday 22 June 2025 11:57:44 +0000 (0:00:02.165) 0:00:17.654 ***********
2025-06-22 12:03:58.950250 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.950262 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.950272 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.950283 | orchestrator |
2025-06-22 12:03:58.950294 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-22 12:03:58.950305 | orchestrator | Sunday 22 June 2025 11:57:46 +0000 (0:00:01.533) 0:00:19.188 ***********
2025-06-22 12:03:58.950316 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.950327 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.950337 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.950449 | orchestrator |
2025-06-22 12:03:58.950464 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-22 12:03:58.950475 | orchestrator | Sunday 22 June 2025 11:57:48 +0000 (0:00:02.474) 0:00:21.662 ***********
2025-06-22 12:03:58.950503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.950526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.950551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.950563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.950575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.950587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-22 12:03:58.950604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.950623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-22 12:03:58.950642 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.950653 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.950665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.950677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.950688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.950700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-22 12:03:58.950712 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.950723 | orchestrator |
2025-06-22 12:03:58.950734 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-06-22 12:03:58.950773 | orchestrator | Sunday 22 June 2025 11:57:49 +0000 (0:00:01.213) 0:00:22.876 ***********
2025-06-22 12:03:58.950793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.950818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.950892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.950905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.950916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.950928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.950940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.950952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-22 12:03:58.951079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-22 12:03:58.951096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.951108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.951120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12', '__omit_place_holder__ae0edaf420014e6904041a4c9c4abc3a55212c12'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-22 12:03:58.951131 | orchestrator |
2025-06-22 12:03:58.951142 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-06-22 12:03:58.951153 | orchestrator | Sunday 22 June 2025 11:57:53 +0000 (0:00:03.765) 0:00:26.641 ***********
2025-06-22 12:03:58.951164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.951188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.951209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.951221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.951233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.951244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.951283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.951307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.951324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.951336 | orchestrator |
2025-06-22 12:03:58.951347 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-06-22 12:03:58.951529 | orchestrator | Sunday 22 June 2025 11:57:57 +0000 (0:00:04.347) 0:00:30.989 ***********
2025-06-22 12:03:58.951547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-22 12:03:58.951567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-22 12:03:58.951578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-22 12:03:58.951627 | orchestrator |
2025-06-22 12:03:58.951637 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-06-22 12:03:58.951660 | orchestrator | Sunday 22 June 2025 11:58:00 +0000 (0:00:02.512) 0:00:33.502 ***********
2025-06-22 12:03:58.951670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-22 12:03:58.951680 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-22 12:03:58.951690 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-22 12:03:58.951699 | orchestrator |
2025-06-22 12:03:58.951709 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-06-22 12:03:58.951718 | orchestrator | Sunday 22 June 2025 11:58:05 +0000 (0:00:04.620) 0:00:38.123 ***********
2025-06-22 12:03:58.951728 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.951737 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.951747 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.951756 | orchestrator |
2025-06-22 12:03:58.951766 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-06-22 12:03:58.951775 | orchestrator | Sunday 22 June 2025 11:58:05 +0000 (0:00:00.822) 0:00:38.945 ***********
2025-06-22 12:03:58.951785 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-22 12:03:58.951797 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-22 12:03:58.951807 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-22 12:03:58.951816 | orchestrator |
2025-06-22 12:03:58.951826 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-06-22 12:03:58.951835 | orchestrator | Sunday 22 June 2025 11:58:09 +0000 (0:00:03.790) 0:00:42.735 ***********
2025-06-22 12:03:58.951845 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-22 12:03:58.951854 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-22 12:03:58.951875 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-22 12:03:58.951884 | orchestrator |
2025-06-22 12:03:58.951894 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-06-22 12:03:58.951904 | orchestrator | Sunday 22 June 2025 11:58:11 +0000 (0:00:01.837) 0:00:44.573 ***********
2025-06-22 12:03:58.951913 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-06-22 12:03:58.951923 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-06-22 12:03:58.951932 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-06-22 12:03:58.951942 | orchestrator |
2025-06-22 12:03:58.951951 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-06-22 12:03:58.951961 | orchestrator | Sunday 22 June 2025 11:58:12 +0000 (0:00:01.312) 0:00:45.885 ***********
2025-06-22 12:03:58.951970 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-06-22 12:03:58.951980 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-06-22 12:03:58.951989 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-06-22 12:03:58.951999 | orchestrator |
2025-06-22 12:03:58.952008 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-22 12:03:58.952018 | orchestrator | Sunday 22 June 2025 11:58:14 +0000 (0:00:02.174) 0:00:48.060 ***********
2025-06-22 12:03:58.952027 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.952092 | orchestrator |
2025-06-22 12:03:58.952157 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-06-22 12:03:58.952168 | orchestrator | Sunday 22 June 2025 11:58:16 +0000 (0:00:01.123) 0:00:49.183 ***********
2025-06-22 12:03:58.952184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.952202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.952213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.952223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.952241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.952251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.952269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.952280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.952297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.952307 | orchestrator |
2025-06-22 12:03:58.952317 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-06-22 12:03:58.952327 | orchestrator | Sunday 22 June 2025 11:58:19 +0000 (0:00:03.635) 0:00:52.818 ***********
2025-06-22 12:03:58.952337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.952353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.952390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.952406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.952428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.952444 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.952473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.952521 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.952532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.952549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-22 12:03:58.952560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.952570 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.952579 | orchestrator |
2025-06-22 12:03:58.952589 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-06-22 12:03:58.952599 | orchestrator | Sunday 22 June 2025 11:58:20 +0000 (0:00:00.751) 0:00:53.570 ***********
2025-06-22 12:03:58.952609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.952624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.952640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.952651 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.952661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.952676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.952687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.952697 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.952707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.952758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.952775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.952785 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.952830 | orchestrator | 2025-06-22 12:03:58.952841 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 
12:03:58.952850 | orchestrator | Sunday 22 June 2025 11:58:22 +0000 (0:00:01.759) 0:00:55.329 *********** 2025-06-22 12:03:58.952867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.952884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.952894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.952904 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.952914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.952924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.952939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.952955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.952971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.952981 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.952991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953001 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.953011 | orchestrator | 2025-06-22 12:03:58.953021 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 12:03:58.953030 | orchestrator | Sunday 22 June 2025 11:58:23 +0000 (0:00:01.544) 0:00:56.873 *********** 2025-06-22 12:03:58.953040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953090 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.953119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953203 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.953214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953282 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.953292 | orchestrator | 2025-06-22 12:03:58.953302 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 12:03:58.953312 | orchestrator | Sunday 22 June 2025 11:58:25 +0000 (0:00:02.114) 0:00:58.987 *********** 2025-06-22 12:03:58.953327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953396 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.953406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953436 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.953450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953564 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.953575 | orchestrator | 2025-06-22 12:03:58.953585 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-22 12:03:58.953594 | orchestrator | Sunday 22 June 2025 11:58:27 +0000 (0:00:01.248) 0:01:00.236 *********** 2025-06-22 12:03:58.953604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953635 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.953650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953695 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.953705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.953715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953735 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.953745 | orchestrator | 2025-06-22 12:03:58.953755 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-22 12:03:58.953765 | orchestrator | Sunday 22 June 2025 11:58:27 +0000 (0:00:00.618) 0:01:00.855 *********** 2025-06-22 12:03:58.953780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}})  2025-06-22 12:03:58.953795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953823 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.953833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}})  2025-06-22 12:03:58.953843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953863 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.953873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}})  2025-06-22 12:03:58.953949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.953962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.953983 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.953993 | orchestrator | 2025-06-22 12:03:58.954003 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-22 12:03:58.954095 | orchestrator | Sunday 22 June 2025 11:58:28 +0000 (0:00:00.578) 0:01:01.433 *********** 2025-06-22 12:03:58.954111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.954121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.954152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.954172 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.954221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.954231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.954247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.954257 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.954273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 12:03:58.954284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 12:03:58.954294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 12:03:58.954304 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.954314 | orchestrator | 2025-06-22 12:03:58.954324 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-22 12:03:58.954340 | orchestrator | Sunday 22 June 2025 11:58:29 +0000 (0:00:01.085) 0:01:02.518 *********** 2025-06-22 12:03:58.954350 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-22 12:03:58.954418 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-22 12:03:58.954443 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-22 12:03:58.954453 | orchestrator |
2025-06-22 12:03:58.954463 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-06-22 12:03:58.954472 | orchestrator | Sunday 22 June 2025 11:58:31 +0000 (0:00:01.584) 0:01:04.103 ***********
2025-06-22 12:03:58.954482 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-22 12:03:58.954492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-22 12:03:58.954501 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-22 12:03:58.954511 | orchestrator |
2025-06-22 12:03:58.954520 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-06-22 12:03:58.954566 | orchestrator | Sunday 22 June 2025 11:58:32 +0000 (0:00:02.422) 0:01:05.430 ***********
2025-06-22 12:03:58.954577 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-22 12:03:58.954587 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-22 12:03:58.954596 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-22 12:03:58.954606 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-22 12:03:58.954616 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.954626 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-22 12:03:58.954641 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.954651 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-22 12:03:58.954660 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.954670 | orchestrator |
2025-06-22 12:03:58.954680 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-06-22 12:03:58.954689 | orchestrator | Sunday 22 June 2025 11:58:34 +0000 (0:00:02.422) 0:01:07.853 ***********
2025-06-22 12:03:58.954706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-22 12:03:58.954718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 12:03:58.954734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 12:03:58.954801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 12:03:58.954811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 12:03:58.954824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 12:03:58.954833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 12:03:58.954846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 12:03:58.954855 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-22 12:03:58.954870 | orchestrator |
2025-06-22 12:03:58.954878 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-06-22 12:03:58.954886 | orchestrator | Sunday 22 June 2025 11:58:38 +0000 (0:00:03.426) 0:01:11.279 ***********
2025-06-22 12:03:58.954894 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.954902 | orchestrator |
2025-06-22 12:03:58.954910 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-06-22 12:03:58.954918 | orchestrator | Sunday 22 June 2025 11:58:39 +0000 (0:00:01.354) 0:01:12.634 ***********
2025-06-22 12:03:58.954928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 12:03:58.954937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 12:03:58.954946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.954954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.954967 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 12:03:58.954982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 12:03:58.954990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 12:03:58.955055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 12:03:58.955111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955135 | orchestrator | 2025-06-22 12:03:58.955143 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-22 12:03:58.955151 | orchestrator | Sunday 22 June 2025 11:58:45 +0000 (0:00:05.907) 0:01:18.541 *********** 2025-06-22 12:03:58.955194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 12:03:58.955203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 12:03:58.955211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 
12:03:58.955224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955232 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.955252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 12:03:58.955262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 12:03:58.955270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 12:03:58.955278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 12:03:58.955290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955382 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.955425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955435 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.955443 | orchestrator | 2025-06-22 12:03:58.955474 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-22 12:03:58.955484 | orchestrator | Sunday 22 June 2025 11:58:46 +0000 (0:00:00.970) 0:01:19.512 *********** 2025-06-22 12:03:58.955493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 12:03:58.955501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 12:03:58.955510 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.955518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 12:03:58.955526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 12:03:58.955534 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.955542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 12:03:58.955550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 12:03:58.955558 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.955567 | orchestrator | 2025-06-22 12:03:58.955575 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-22 12:03:58.955583 | orchestrator | Sunday 22 June 2025 11:58:47 +0000 (0:00:01.408) 0:01:20.920 *********** 2025-06-22 12:03:58.955591 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.955598 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.955606 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.955614 | orchestrator | 2025-06-22 12:03:58.955622 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-22 12:03:58.955630 | orchestrator | Sunday 22 June 2025 11:58:49 +0000 (0:00:01.521) 0:01:22.442 *********** 2025-06-22 12:03:58.955644 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.955686 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.955694 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.955702 | orchestrator | 2025-06-22 12:03:58.955710 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-22 12:03:58.955718 | orchestrator | Sunday 22 June 2025 11:58:51 +0000 (0:00:02.125) 0:01:24.568 *********** 2025-06-22 12:03:58.955731 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.955739 | orchestrator | 2025-06-22 12:03:58.955747 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-22 12:03:58.955754 | orchestrator | Sunday 22 June 2025 
11:58:52 +0000 (0:00:01.342) 0:01:25.911 *********** 2025-06-22 12:03:58.955770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.955780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.955797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.955860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955877 | orchestrator | 2025-06-22 12:03:58.955885 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-22 12:03:58.955893 | orchestrator | Sunday 22 June 2025 11:58:59 +0000 (0:00:07.003) 0:01:32.914 *********** 2025-06-22 12:03:58.955901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.955919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955940 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.955949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.955957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.955989 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.956035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956052 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.956060 | orchestrator | 2025-06-22 12:03:58.956068 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-22 12:03:58.956076 | orchestrator | Sunday 22 June 2025 11:59:00 +0000 (0:00:00.612) 0:01:33.527 *********** 2025-06-22 12:03:58.956084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 12:03:58.956093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 12:03:58.956101 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.956109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 12:03:58.956117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 12:03:58.956134 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956142 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 12:03:58.956150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 12:03:58.956158 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.956166 | orchestrator | 2025-06-22 12:03:58.956174 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-22 12:03:58.956183 | orchestrator | Sunday 22 June 2025 11:59:01 +0000 (0:00:00.825) 0:01:34.353 *********** 2025-06-22 12:03:58.956191 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.956199 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.956207 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.956215 | orchestrator | 2025-06-22 12:03:58.956223 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-22 12:03:58.956230 | orchestrator | Sunday 22 June 2025 11:59:03 +0000 (0:00:01.808) 0:01:36.161 *********** 2025-06-22 12:03:58.956238 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.956246 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.956254 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.956262 | orchestrator | 2025-06-22 12:03:58.956270 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-22 12:03:58.956278 | orchestrator | Sunday 22 June 2025 11:59:05 +0000 (0:00:02.037) 0:01:38.198 *********** 2025-06-22 12:03:58.956286 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.956294 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956302 | orchestrator 
| skipping: [testbed-node-2] 2025-06-22 12:03:58.956310 | orchestrator | 2025-06-22 12:03:58.956318 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-22 12:03:58.956329 | orchestrator | Sunday 22 June 2025 11:59:05 +0000 (0:00:00.329) 0:01:38.528 *********** 2025-06-22 12:03:58.956337 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.956345 | orchestrator | 2025-06-22 12:03:58.956353 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-22 12:03:58.956381 | orchestrator | Sunday 22 June 2025 11:59:06 +0000 (0:00:00.651) 0:01:39.180 *********** 2025-06-22 12:03:58.956396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 12:03:58.956405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 12:03:58.956419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 12:03:58.956427 | orchestrator | 2025-06-22 12:03:58.956435 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-22 12:03:58.956443 | orchestrator | Sunday 22 June 2025 11:59:09 +0000 (0:00:03.094) 0:01:42.274 *********** 2025-06-22 12:03:58.956451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 12:03:58.956460 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.956471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 12:03:58.956480 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 12:03:58.956508 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.956516 | orchestrator | 2025-06-22 12:03:58.956549 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-22 12:03:58.956557 | orchestrator | Sunday 22 June 2025 11:59:10 +0000 (0:00:01.617) 0:01:43.892 *********** 2025-06-22 12:03:58.956565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 12:03:58.956576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 12:03:58.956585 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.956593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 12:03:58.956602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 12:03:58.956610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 12:03:58.956641 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 12:03:58.956662 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.956670 | orchestrator | 2025-06-22 12:03:58.956678 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-22 12:03:58.956686 | orchestrator | Sunday 22 June 2025 11:59:12 +0000 (0:00:01.770) 0:01:45.663 *********** 2025-06-22 12:03:58.956694 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.956701 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956750 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.956758 | orchestrator | 2025-06-22 12:03:58.956766 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-22 12:03:58.956773 | orchestrator | Sunday 22 June 2025 11:59:13 +0000 (0:00:01.002) 0:01:46.665 *********** 2025-06-22 12:03:58.956781 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.956789 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.956797 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.956810 | orchestrator | 2025-06-22 12:03:58.956869 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-22 12:03:58.956883 | orchestrator | Sunday 22 June 2025 11:59:14 +0000 (0:00:01.038) 0:01:47.703 *********** 2025-06-22 12:03:58.956892 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.956900 | orchestrator | 2025-06-22 12:03:58.956908 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-22 12:03:58.956916 | orchestrator | Sunday 22 June 2025 11:59:15 +0000 (0:00:00.999) 0:01:48.703 *********** 2025-06-22 12:03:58.956924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.956933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.956954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.956999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.957020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957054 | orchestrator | 2025-06-22 12:03:58.957063 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-22 12:03:58.957071 | orchestrator | Sunday 22 June 2025 11:59:19 +0000 (0:00:03.580) 0:01:52.284 *********** 2025-06-22 12:03:58.957079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.957087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957126 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.957134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.957143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-06-22 12:03:58.957177 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.957191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.957200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957224 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.957272 | orchestrator | 2025-06-22 12:03:58.957281 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-22 12:03:58.957297 | orchestrator | Sunday 22 June 2025 11:59:20 +0000 (0:00:01.548) 0:01:53.833 *********** 2025-06-22 12:03:58.957306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 12:03:58.957318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 12:03:58.957327 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.957335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 12:03:58.957343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 12:03:58.957351 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.957415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 12:03:58.957425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 12:03:58.957433 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.957441 | orchestrator | 2025-06-22 12:03:58.957449 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-22 12:03:58.957457 | orchestrator | Sunday 22 June 2025 11:59:21 +0000 (0:00:01.030) 0:01:54.863 *********** 2025-06-22 12:03:58.957465 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.957472 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.957480 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.957488 | orchestrator | 2025-06-22 12:03:58.957496 | orchestrator | 
TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-22 12:03:58.957504 | orchestrator | Sunday 22 June 2025 11:59:23 +0000 (0:00:01.302) 0:01:56.165 *********** 2025-06-22 12:03:58.957512 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.957519 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.957527 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.957535 | orchestrator | 2025-06-22 12:03:58.957543 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-22 12:03:58.957551 | orchestrator | Sunday 22 June 2025 11:59:25 +0000 (0:00:02.205) 0:01:58.371 *********** 2025-06-22 12:03:58.957559 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.957566 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.957574 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.957582 | orchestrator | 2025-06-22 12:03:58.957590 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-22 12:03:58.957598 | orchestrator | Sunday 22 June 2025 11:59:25 +0000 (0:00:00.613) 0:01:58.985 *********** 2025-06-22 12:03:58.957606 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.957613 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.957621 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.957629 | orchestrator | 2025-06-22 12:03:58.957637 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-22 12:03:58.957645 | orchestrator | Sunday 22 June 2025 11:59:26 +0000 (0:00:00.297) 0:01:59.283 *********** 2025-06-22 12:03:58.957653 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.957661 | orchestrator | 2025-06-22 12:03:58.957669 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-22 
12:03:58.957682 | orchestrator | Sunday 22 June 2025 11:59:26 +0000 (0:00:00.794) 0:02:00.078 *********** 2025-06-22 12:03:58.957690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:03:58.957701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:03:58.957709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:03:58.957748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 
12:03:58.957766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:03:58.957809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:03:58.957830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957880 | orchestrator | 2025-06-22 12:03:58.957887 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-22 12:03:58.957894 | orchestrator | Sunday 22 June 2025 11:59:30 +0000 (0:00:03.831) 0:02:03.909 *********** 2025-06-22 12:03:58.957904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:03:58.957912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:03:58.957923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957962 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.957973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:03:58.957985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:03:58.957992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.957999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:03:58.958042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:03:58.958063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958099 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.958107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.958151 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.958158 | orchestrator | 2025-06-22 12:03:58.958165 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-22 12:03:58.958171 | orchestrator | Sunday 
22 June 2025 11:59:31 +0000 (0:00:00.820) 0:02:04.730 *********** 2025-06-22 12:03:58.958178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 12:03:58.958185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 12:03:58.958192 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.958199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 12:03:58.958205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 12:03:58.958212 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.958219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 12:03:58.958226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 12:03:58.958232 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.958239 | orchestrator | 2025-06-22 12:03:58.958246 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-22 12:03:58.958252 | orchestrator | Sunday 22 June 2025 11:59:32 +0000 (0:00:00.935) 0:02:05.665 *********** 2025-06-22 
12:03:58.958259 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.958266 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.958272 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.958279 | orchestrator | 2025-06-22 12:03:58.958285 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-22 12:03:58.958292 | orchestrator | Sunday 22 June 2025 11:59:34 +0000 (0:00:01.875) 0:02:07.541 *********** 2025-06-22 12:03:58.958299 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.958305 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.958312 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.958318 | orchestrator | 2025-06-22 12:03:58.958325 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-22 12:03:58.958332 | orchestrator | Sunday 22 June 2025 11:59:36 +0000 (0:00:02.064) 0:02:09.605 *********** 2025-06-22 12:03:58.958338 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.958345 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.958352 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.958378 | orchestrator | 2025-06-22 12:03:58.958394 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-22 12:03:58.958405 | orchestrator | Sunday 22 June 2025 11:59:36 +0000 (0:00:00.319) 0:02:09.925 *********** 2025-06-22 12:03:58.958423 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.958429 | orchestrator | 2025-06-22 12:03:58.958436 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-22 12:03:58.958443 | orchestrator | Sunday 22 June 2025 11:59:37 +0000 (0:00:00.770) 0:02:10.695 *********** 2025-06-22 12:03:58.958464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:03:58.958474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 12:03:58.958494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-22 12:03:58.958503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.958519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '',
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-22 12:03:58.958531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'],
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.958539 | orchestrator |
2025-06-22 12:03:58.958546 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-06-22 12:03:58.958553 | orchestrator | Sunday 22 June 2025 11:59:41 +0000 (0:00:04.158) 0:02:14.854 ***********
2025-06-22 12:03:58.958568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '',
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-22 12:03:58.958580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-22 12:03:58.958591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.958603 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.958615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.958622 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.958633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-22 12:03:58.958650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'},
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.958658 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.958665 | orchestrator |
2025-06-22 12:03:58.958672 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-06-22 12:03:58.958679 | orchestrator | Sunday 22 June 2025 11:59:44 +0000 (0:00:02.942) 0:02:17.796 ***********
2025-06-22 12:03:58.958686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-22 12:03:58.958707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-22 12:03:58.958715 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.958722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-22 12:03:58.958739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-22 12:03:58.958747 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.958754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-22 12:03:58.958765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external',
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-22 12:03:58.958772 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.958779 | orchestrator |
2025-06-22 12:03:58.958786 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-06-22 12:03:58.958793 | orchestrator | Sunday 22 June 2025 11:59:47 +0000 (0:00:03.168) 0:02:20.965 ***********
2025-06-22 12:03:58.958799 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.958806 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.958812 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.958819 | orchestrator |
2025-06-22 12:03:58.958826 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-06-22 12:03:58.958832 | orchestrator | Sunday 22 June 2025 11:59:49 +0000 (0:00:01.637) 0:02:22.602 ***********
2025-06-22 12:03:58.958839 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.958845 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.958852 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.958859 | orchestrator |
2025-06-22 12:03:58.958865 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-06-22 12:03:58.958872 | orchestrator | Sunday 22 June 2025 11:59:51 +0000 (0:00:02.024) 0:02:24.626 ***********
2025-06-22 12:03:58.958878 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.958885 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.958891 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.958898 | orchestrator |
2025-06-22 12:03:58.958905 | orchestrator | TASK [include_role : grafana] **************************************************
2025-06-22 12:03:58.958911 | orchestrator | Sunday 22 June 2025 11:59:51 +0000 (0:00:00.311) 0:02:24.938 ***********
2025-06-22 12:03:58.958918 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.958925 | orchestrator |
2025-06-22 12:03:58.958931 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-06-22 12:03:58.958938 | orchestrator | Sunday 22 June 2025 11:59:52 +0000 (0:00:00.817) 0:02:25.755 ***********
2025-06-22 12:03:58.958952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-22 12:03:58.958959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'},
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-22 12:03:58.958969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-22 12:03:58.958977 | orchestrator |
2025-06-22 12:03:58.958983 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-06-22 12:03:58.958990 | orchestrator | Sunday 22 June 2025 11:59:56 +0000 (0:00:04.294) 0:02:30.049 ***********
2025-06-22 12:03:58.959001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-22 12:03:58.959008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana',
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-22 12:03:58.959015 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.959022 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.959029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-22 12:03:58.959040 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.959046 | orchestrator |
2025-06-22 12:03:58.959053 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-06-22 12:03:58.959059 | orchestrator | Sunday 22 June 2025 11:59:57 +0000 (0:00:00.428) 0:02:30.477 ***********
2025-06-22 12:03:58.959066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http',
'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-22 12:03:58.959073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-22 12:03:58.959079 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.959086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-22 12:03:58.959093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-22 12:03:58.959099 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.959106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-22 12:03:58.959116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-22 12:03:58.959123 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.959129 | orchestrator |
2025-06-22 12:03:58.959136 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-06-22 12:03:58.959142 | orchestrator | Sunday 22 June 2025 11:59:58 +0000 (0:00:00.677) 0:02:31.155 ***********
2025-06-22 12:03:58.959149 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.959156 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.959162 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.959169 | orchestrator |
2025-06-22 12:03:58.959175 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-06-22 12:03:58.959182 | orchestrator | Sunday 22 June 2025 11:59:59 +0000 (0:00:01.783) 0:02:32.938 ***********
2025-06-22 12:03:58.959189 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.959195 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.959202 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.959209 | orchestrator |
2025-06-22 12:03:58.959219 | orchestrator | TASK [include_role : heat] *****************************************************
2025-06-22 12:03:58.959226 | orchestrator | Sunday 22 June 2025 12:00:01 +0000 (0:00:02.114) 0:02:35.053 ***********
2025-06-22 12:03:58.959233 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.959239 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.959246 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.959252 | orchestrator |
2025-06-22 12:03:58.959259 | orchestrator | TASK [include_role : horizon] **************************************************
2025-06-22 12:03:58.959266 | orchestrator | Sunday 22 June 2025 12:00:02 +0000 (0:00:00.312) 0:02:35.365 ***********
2025-06-22 12:03:58.959276 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.959283 | orchestrator |
2025-06-22 12:03:58.959290 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-06-22 12:03:58.959296 | orchestrator | Sunday 22 June 2025 12:00:03 +0000 (0:00:00.973) 0:02:36.339 ***********
2025-06-22 12:03:58.959304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes',
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:03:58.959320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:03:58.959334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-22 12:03:58.959342 | orchestrator |
2025-06-22 12:03:58.959349 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-06-22 12:03:58.959373 | orchestrator | Sunday 22 June 2025 12:00:07 +0000 (0:00:03.867) 0:02:40.207 ***********
2025-06-22 12:03:58.959398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port':
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:03:58.959418 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.959429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:03:58.959437 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.959450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:03:58.959462 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.959468 | orchestrator | 2025-06-22 12:03:58.959475 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-22 12:03:58.959482 | orchestrator | Sunday 22 June 2025 12:00:07 +0000 (0:00:00.592) 0:02:40.799 *********** 2025-06-22 12:03:58.959489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-22 12:03:58.959496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-22 12:03:58.959504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-22 12:03:58.959512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-22 12:03:58.959522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-22 12:03:58.959529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-22 12:03:58.959544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-22 12:03:58.959552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-22 12:03:58.959558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-22 12:03:58.959565 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.959572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-22 12:03:58.959578 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.959585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-22 12:03:58.959592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-22 12:03:58.959602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-22 12:03:58.959613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-22 12:03:58.959622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-22 12:03:58.959631 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.959640 | orchestrator |
2025-06-22 12:03:58.959650 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-22 12:03:58.959667 | orchestrator | Sunday 22 June 2025 12:00:08 +0000 (0:00:00.942) 0:02:41.742 ***********
2025-06-22 12:03:58.959680 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.959690 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.959701 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.959711 | orchestrator |
2025-06-22 12:03:58.959722 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-22 12:03:58.959738 | orchestrator | Sunday 22 June 2025 12:00:10 +0000 (0:00:01.796) 0:02:43.539 ***********
2025-06-22 12:03:58.959750 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.959762 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.959772 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.959783 | orchestrator |
2025-06-22 12:03:58.959795 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-22 12:03:58.959815 | orchestrator | Sunday 22 June 2025 12:00:12 +0000 (0:00:02.116) 0:02:45.655 ***********
2025-06-22 12:03:58.959824 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.959831 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.959838 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.959844 | orchestrator |
2025-06-22 12:03:58.959851 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-22 12:03:58.959864 | orchestrator | Sunday 22 June 2025 12:00:12 +0000 (0:00:00.359) 0:02:46.014 ***********
2025-06-22 12:03:58.959871 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.959878 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.959885 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.959892 | orchestrator |
2025-06-22 12:03:58.959898 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-22 12:03:58.959905 | orchestrator | Sunday 22 June 2025 12:00:13 +0000 (0:00:00.354) 0:02:46.368 ***********
2025-06-22 12:03:58.959912 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.959918 | orchestrator |
2025-06-22 12:03:58.959925 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-22 12:03:58.959932 | orchestrator | Sunday 22 June 2025 12:00:14 +0000 (0:00:01.290) 0:02:47.658 ***********
2025-06-22 12:03:58.959946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:03:58.959955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:03:58.959963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:03:58.959987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:03:58.959998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:03:58.960010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:03:58.960017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:03:58.960025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:03:58.960032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:03:58.960045 | orchestrator | 2025-06-22 12:03:58.960051 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-22 12:03:58.960058 | orchestrator | Sunday 22 June 2025 12:00:18 +0000 (0:00:03.603) 0:02:51.261 *********** 2025-06-22 12:03:58.960069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 12:03:58.960081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:03:58.960088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:03:58.960095 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.960103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 12:03:58.960115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:03:58.960122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:03:58.960132 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.960277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 12:03:58.960290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:03:58.960297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:03:58.960304 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.960311 | orchestrator | 2025-06-22 12:03:58.960318 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-22 12:03:58.960324 | orchestrator | Sunday 22 June 2025 12:00:18 +0000 (0:00:00.667) 0:02:51.929 *********** 2025-06-22 12:03:58.960332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 12:03:58.960345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 12:03:58.960352 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.960410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 12:03:58.960418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 12:03:58.960425 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.960432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 12:03:58.960444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 12:03:58.960451 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.960457 | orchestrator | 2025-06-22 12:03:58.960464 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-22 12:03:58.960471 | orchestrator | Sunday 22 June 2025 12:00:19 +0000 (0:00:01.110) 0:02:53.040 *********** 2025-06-22 12:03:58.960477 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.960484 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.960490 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.960497 | orchestrator | 2025-06-22 12:03:58.960504 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-22 12:03:58.960511 | orchestrator | Sunday 22 June 2025 12:00:21 +0000 (0:00:01.331) 0:02:54.372 *********** 2025-06-22 12:03:58.960517 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.960524 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.960530 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.960537 | orchestrator | 2025-06-22 12:03:58.960544 | 
orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-22 12:03:58.960555 | orchestrator | Sunday 22 June 2025 12:00:23 +0000 (0:00:02.136) 0:02:56.508 *********** 2025-06-22 12:03:58.960562 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.960569 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.960576 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.960582 | orchestrator | 2025-06-22 12:03:58.960589 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-22 12:03:58.960595 | orchestrator | Sunday 22 June 2025 12:00:23 +0000 (0:00:00.329) 0:02:56.838 *********** 2025-06-22 12:03:58.960602 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.960609 | orchestrator | 2025-06-22 12:03:58.960615 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-22 12:03:58.960622 | orchestrator | Sunday 22 June 2025 12:00:25 +0000 (0:00:01.306) 0:02:58.144 *********** 2025-06-22 12:03:58.960629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:03:58.960643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.960650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:03:58.960661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.960672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:03:58.960684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.960691 | orchestrator | 2025-06-22 12:03:58.960698 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-22 12:03:58.960704 | orchestrator | Sunday 22 June 2025 12:00:28 +0000 (0:00:03.468) 0:03:01.613 *********** 2025-06-22 12:03:58.960712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:03:58.960722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.960729 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.960740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:03:58.960747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.960758 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.960765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:03:58.960773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.960780 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.960786 | orchestrator | 2025-06-22 12:03:58.960796 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-22 12:03:58.960807 | orchestrator | Sunday 22 June 2025 12:00:29 +0000 (0:00:00.810) 0:03:02.424 *********** 2025-06-22 12:03:58.960818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 12:03:58.960830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 12:03:58.960842 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.960857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 12:03:58.960870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 12:03:58.960881 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.960892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 12:03:58.960904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 
12:03:58.960933 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.960945 | orchestrator | 2025-06-22 12:03:58.960956 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-22 12:03:58.960967 | orchestrator | Sunday 22 June 2025 12:00:30 +0000 (0:00:01.432) 0:03:03.857 *********** 2025-06-22 12:03:58.960979 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.960989 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.961001 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.961009 | orchestrator | 2025-06-22 12:03:58.961016 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-22 12:03:58.961023 | orchestrator | Sunday 22 June 2025 12:00:32 +0000 (0:00:01.376) 0:03:05.234 *********** 2025-06-22 12:03:58.961030 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.961038 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.961045 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.961052 | orchestrator | 2025-06-22 12:03:58.961059 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-22 12:03:58.961066 | orchestrator | Sunday 22 June 2025 12:00:34 +0000 (0:00:02.238) 0:03:07.472 *********** 2025-06-22 12:03:58.961073 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.961080 | orchestrator | 2025-06-22 12:03:58.961087 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-22 12:03:58.961094 | orchestrator | Sunday 22 June 2025 12:00:35 +0000 (0:00:01.046) 0:03:08.518 *********** 2025-06-22 12:03:58.961102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 12:03:58.961111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961130 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 12:03:58.961156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 12:03:58.961190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961233 | orchestrator | 2025-06-22 12:03:58.961240 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-22 12:03:58.961246 | orchestrator | Sunday 22 June 2025 12:00:39 +0000 (0:00:04.219) 0:03:12.738 *********** 2025-06-22 12:03:58.961253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 12:03:58.961259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961286 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.961297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 12:03:58.961303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961316 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961323 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.961336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 12:03:58.961347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.961388 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.961395 | orchestrator | 2025-06-22 12:03:58.961402 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-22 12:03:58.961413 | orchestrator | Sunday 22 June 2025 12:00:40 +0000 (0:00:00.733) 0:03:13.471 *********** 2025-06-22 12:03:58.961424 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 12:03:58.961434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 12:03:58.961445 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.961460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 12:03:58.961472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 12:03:58.961491 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.961501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 12:03:58.961511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 12:03:58.961522 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.961529 | orchestrator | 2025-06-22 12:03:58.961535 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-22 12:03:58.961541 | orchestrator | Sunday 22 June 2025 12:00:41 +0000 (0:00:01.271) 0:03:14.743 *********** 2025-06-22 12:03:58.961547 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.961554 | orchestrator | changed: [testbed-node-1] 2025-06-22 
12:03:58.961560 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.961566 | orchestrator | 2025-06-22 12:03:58.961572 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-22 12:03:58.961583 | orchestrator | Sunday 22 June 2025 12:00:43 +0000 (0:00:01.680) 0:03:16.423 *********** 2025-06-22 12:03:58.961589 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.961595 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.961601 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.961608 | orchestrator | 2025-06-22 12:03:58.961614 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-22 12:03:58.961620 | orchestrator | Sunday 22 June 2025 12:00:45 +0000 (0:00:02.210) 0:03:18.633 *********** 2025-06-22 12:03:58.961627 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.961633 | orchestrator | 2025-06-22 12:03:58.961639 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-22 12:03:58.961645 | orchestrator | Sunday 22 June 2025 12:00:46 +0000 (0:00:01.128) 0:03:19.762 *********** 2025-06-22 12:03:58.961652 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 12:03:58.961658 | orchestrator | 2025-06-22 12:03:58.961664 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-22 12:03:58.961670 | orchestrator | Sunday 22 June 2025 12:00:49 +0000 (0:00:03.141) 0:03:22.903 *********** 2025-06-22 12:03:58.961685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:03:58.961697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 12:03:58.961704 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.961718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:03:58.961725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 12:03:58.961732 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.961739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:03:58.961753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 12:03:58.961760 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.961766 | orchestrator | 2025-06-22 12:03:58.961772 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-22 12:03:58.961779 | orchestrator | Sunday 22 June 2025 12:00:52 +0000 (0:00:02.994) 0:03:25.897 *********** 2025-06-22 12:03:58.961790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:03:58.961805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 12:03:58.961817 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.961832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:03:58.961849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 12:03:58.961862 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.961873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:03:58.961890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 12:03:58.961896 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.961903 | orchestrator | 2025-06-22 12:03:58.961909 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-22 12:03:58.961915 | orchestrator | Sunday 22 June 2025 12:00:55 +0000 (0:00:03.005) 0:03:28.903 *********** 2025-06-22 12:03:58.961925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 12:03:58.961935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 12:03:58.961942 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.961948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 12:03:58.961959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 12:03:58.961965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 12:03:58.961972 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.961978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 12:03:58.961984 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.961991 | orchestrator | 2025-06-22 12:03:58.961997 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users 
config] ************
2025-06-22 12:03:58.962003 | orchestrator | Sunday 22 June 2025 12:00:58 +0000 (0:00:02.471) 0:03:31.374 ***********
2025-06-22 12:03:58.962009 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.962047 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.962055 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.962061 | orchestrator |
2025-06-22 12:03:58.962068 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-22 12:03:58.962074 | orchestrator | Sunday 22 June 2025 12:01:00 +0000 (0:00:01.781) 0:03:33.156 ***********
2025-06-22 12:03:58.962080 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962086 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962093 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962099 | orchestrator |
2025-06-22 12:03:58.962105 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-22 12:03:58.962111 | orchestrator | Sunday 22 June 2025 12:01:01 +0000 (0:00:01.238) 0:03:34.394 ***********
2025-06-22 12:03:58.962121 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962152 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962159 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962165 | orchestrator |
2025-06-22 12:03:58.962172 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-22 12:03:58.962178 | orchestrator | Sunday 22 June 2025 12:01:01 +0000 (0:00:00.273) 0:03:34.668 ***********
2025-06-22 12:03:58.962184 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.962190 | orchestrator |
2025-06-22 12:03:58.962196 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-22 12:03:58.962202 | orchestrator | Sunday 22 June 2025 12:01:02 +0000 (0:00:01.100) 0:03:35.769 ***********
2025-06-22 12:03:58.962220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-22 12:03:58.962233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-22 12:03:58.962240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-22 12:03:58.962247 | orchestrator |
2025-06-22 12:03:58.962253 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-22 12:03:58.962260 | orchestrator | Sunday 22 June 2025 12:01:04 +0000 (0:00:01.740) 0:03:37.509 ***********
2025-06-22 12:03:58.962266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-22 12:03:58.962276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-22 12:03:58.962287 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962293 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-22 12:03:58.962311 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962317 | orchestrator |
2025-06-22 12:03:58.962323 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-22 12:03:58.962329 | orchestrator | Sunday 22 June 2025 12:01:04 +0000 (0:00:00.421) 0:03:37.931 ***********
2025-06-22 12:03:58.962336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-22 12:03:58.962343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-22 12:03:58.962350 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962374 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-22 12:03:58.962388 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962394 | orchestrator |
2025-06-22 12:03:58.962400 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-22 12:03:58.962407 | orchestrator | Sunday 22 June 2025 12:01:05 +0000 (0:00:00.799) 0:03:38.565 ***********
2025-06-22 12:03:58.962413 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962419 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962425 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962431 | orchestrator |
2025-06-22 12:03:58.962437 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-22 12:03:58.962443 | orchestrator | Sunday 22 June 2025 12:01:06 +0000 (0:00:01.328) 0:03:39.364 ***********
2025-06-22 12:03:58.962450 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962456 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962462 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962468 | orchestrator |
2025-06-22 12:03:58.962474 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-22 12:03:58.962480 | orchestrator | Sunday 22 June 2025 12:01:07 +0000 (0:00:01.328) 0:03:40.692 ***********
2025-06-22 12:03:58.962486 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.962492 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.962498 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.962505 | orchestrator |
2025-06-22 12:03:58.962511 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-22 12:03:58.962517 | orchestrator | Sunday 22 June 2025 12:01:07 +0000 (0:00:00.325) 0:03:41.018 ***********
2025-06-22 12:03:58.962523 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.962533 | orchestrator |
2025-06-22 12:03:58.962540 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-22 12:03:58.962546 | orchestrator | Sunday 22 June 2025 12:01:09 +0000 (0:00:01.454) 0:03:42.472 ***********
2025-06-22 12:03:58.962555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-22 12:03:58.962567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-22 12:03:58.962624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.962663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-22 12:03:58.962700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.962714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.962741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-22 12:03:58.962772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.962812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-22 12:03:58.962823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-22 12:03:58.962894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.962901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.962919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.962935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.962946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value':
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.962952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:03:58.962959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.962969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 12:03:58.962975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 12:03:58.962985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.962995 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 12:03:58.963002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:03:58.963009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963019 | orchestrator | 2025-06-22 12:03:58.963026 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-22 12:03:58.963032 | orchestrator | Sunday 22 June 2025 12:01:13 +0000 (0:00:04.475) 0:03:46.947 *********** 2025-06-22 12:03:58.963039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:03:58.963049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 12:03:58.963083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 12:03:58.963101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 12:03:58.963108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:03:58.963130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:03:58.963136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 
12:03:58.963143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 12:03:58.963170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 12:03:58.963186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 12:03:58.963193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.963224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 12:03:58.963234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 12:03:58.963241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:03:58.963248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:03:58.963254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 12:03:58.963264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963285 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.963292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.963315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-22 12:03:58.963500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-22 12:03:58.963507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.963520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.963531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.963553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False,
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.963561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.963574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.963580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963601 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.963608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-22 12:03:58.963614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-22 12:03:58.963621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-22 12:03:58.963707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:03:58.963723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.963730 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.963737 | orchestrator |
2025-06-22 12:03:58.963743 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-22 12:03:58.963749 | orchestrator | Sunday 22 June 2025 12:01:15 +0000 (0:00:01.461) 0:03:48.409 ***********
2025-06-22 12:03:58.963756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-22 12:03:58.963764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-22 12:03:58.963770 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.963776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-22 12:03:58.963782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-22 12:03:58.963788 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.963795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True,
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-22 12:03:58.963801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-22 12:03:58.963807 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.963813 | orchestrator |
2025-06-22 12:03:58.963819 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-22 12:03:58.963825 | orchestrator | Sunday 22 June 2025 12:01:17 +0000 (0:00:02.115) 0:03:50.525 ***********
2025-06-22 12:03:58.963831 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.963838 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.963844 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.963850 | orchestrator |
2025-06-22 12:03:58.963856 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-22 12:03:58.963862 | orchestrator | Sunday 22 June 2025 12:01:18 +0000 (0:00:01.348) 0:03:51.874 ***********
2025-06-22 12:03:58.963868 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.963874 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.963880 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.963886 | orchestrator |
2025-06-22 12:03:58.963892 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-22 12:03:58.963899 | orchestrator | Sunday 22 June 2025 12:01:20 +0000 (0:00:02.078) 0:03:53.952 ***********
2025-06-22 12:03:58.963909 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.963915 | orchestrator |
2025-06-22 12:03:58.963921 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-22 12:03:58.963927 | orchestrator | Sunday 22 June 2025 12:01:22 +0000 (0:00:01.169) 0:03:55.122 ***********
2025-06-22 12:03:58.963937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.963948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.963955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.963961 | orchestrator |
2025-06-22 12:03:58.963968 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-06-22 12:03:58.963974 | orchestrator | Sunday 22 June 2025 12:01:25 +0000 (0:00:03.691) 0:03:58.814 ***********
2025-06-22 12:03:58.963980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.963990 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.964000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.964006 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.964016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port':
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.964023 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.964029 | orchestrator |
2025-06-22 12:03:58.964035 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-06-22 12:03:58.964041 | orchestrator | Sunday 22 June 2025 12:01:26 +0000 (0:00:00.518) 0:03:59.332 ***********
2025-06-22 12:03:58.964047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-22 12:03:58.964054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-22 12:03:58.964061 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.964067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-22 12:03:58.964073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-22 12:03:58.964080 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.964087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-22 12:03:58.964098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-22 12:03:58.964106 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.964113 | orchestrator |
2025-06-22 12:03:58.964120 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-06-22 12:03:58.964127 | orchestrator | Sunday 22 June 2025 12:01:27 +0000 (0:00:00.838) 0:04:00.171 ***********
2025-06-22 12:03:58.964134 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.964141 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.964148 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.964155 | orchestrator |
2025-06-22 12:03:58.964163 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-06-22 12:03:58.964170 | orchestrator | Sunday 22 June 2025 12:01:28 +0000 (0:00:01.708) 0:04:01.879 ***********
2025-06-22 12:03:58.964177 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.964184 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.964192 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.964199 | orchestrator |
2025-06-22 12:03:58.964206 | orchestrator | TASK [include_role : nova] *****************************************************
2025-06-22 12:03:58.964213 | orchestrator | Sunday 22 June 2025 12:01:30 +0000 (0:00:02.137) 0:04:04.017 ***********
2025-06-22 12:03:58.964221 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.964228 | orchestrator |
2025-06-22 12:03:58.964235 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-06-22 12:03:58.964242 | orchestrator | Sunday 22 June 2025 12:01:32 +0000 (0:00:01.281) 0:04:05.298 ***********
2025-06-22 12:03:58.964256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.964265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.964294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.964323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964336 | orchestrator |
2025-06-22 12:03:58.964342 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-06-22 12:03:58.964349 | orchestrator | Sunday 22 June 2025 12:01:36 +0000 (0:00:04.592) 0:04:09.891 ***********
2025-06-22 12:03:58.964406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.964420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.964427
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.964438 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.964445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 
12:03:58.964452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.964463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.964470 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.964481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.964488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.964499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:03:58.964505 | orchestrator | skipping: [testbed-node-2] 
2025-06-22 12:03:58.964512 | orchestrator | 2025-06-22 12:03:58.964518 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-22 12:03:58.964524 | orchestrator | Sunday 22 June 2025 12:01:37 +0000 (0:00:01.017) 0:04:10.909 *********** 2025-06-22 12:03:58.964531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964556 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.964562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964587 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.964595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 12:03:58.964622 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.964627 | orchestrator | 2025-06-22 12:03:58.964632 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-22 12:03:58.964638 | orchestrator | Sunday 22 June 2025 12:01:38 +0000 (0:00:00.911) 0:04:11.820 *********** 2025-06-22 12:03:58.964643 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.964649 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.964654 | orchestrator | 
changed: [testbed-node-2] 2025-06-22 12:03:58.964659 | orchestrator | 2025-06-22 12:03:58.964665 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-22 12:03:58.964670 | orchestrator | Sunday 22 June 2025 12:01:40 +0000 (0:00:01.755) 0:04:13.576 *********** 2025-06-22 12:03:58.964675 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.964681 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.964686 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.964691 | orchestrator | 2025-06-22 12:03:58.964697 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-22 12:03:58.964702 | orchestrator | Sunday 22 June 2025 12:01:42 +0000 (0:00:02.124) 0:04:15.701 *********** 2025-06-22 12:03:58.964707 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.964713 | orchestrator | 2025-06-22 12:03:58.964718 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-22 12:03:58.964723 | orchestrator | Sunday 22 June 2025 12:01:44 +0000 (0:00:01.627) 0:04:17.329 *********** 2025-06-22 12:03:58.964729 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-22 12:03:58.964734 | orchestrator | 2025-06-22 12:03:58.964740 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-22 12:03:58.964745 | orchestrator | Sunday 22 June 2025 12:01:45 +0000 (0:00:01.165) 0:04:18.494 *********** 2025-06-22 12:03:58.964751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 12:03:58.964757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 12:03:58.964766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 12:03:58.964775 | orchestrator | 2025-06-22 12:03:58.964781 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-22 12:03:58.964786 | orchestrator | Sunday 22 June 2025 12:01:49 +0000 (0:00:03.973) 0:04:22.467 *********** 2025-06-22 12:03:58.964796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.964802 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.964807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.964813 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.964819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.964824 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.964830 | orchestrator | 2025-06-22 12:03:58.964835 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-22 12:03:58.964841 | orchestrator | Sunday 22 June 2025 12:01:50 +0000 (0:00:01.543) 0:04:24.011 *********** 2025-06-22 12:03:58.964846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}})  2025-06-22 12:03:58.964852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 12:03:58.964858 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.964863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 12:03:58.964869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 12:03:58.964874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 12:03:58.964882 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.964892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 12:03:58.964907 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.964916 | orchestrator | 2025-06-22 12:03:58.964925 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 12:03:58.964934 | orchestrator | Sunday 22 June 2025 12:01:52 +0000 (0:00:01.917) 0:04:25.928 *********** 2025-06-22 12:03:58.964947 | orchestrator | 
changed: [testbed-node-0] 2025-06-22 12:03:58.964957 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.964966 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.964976 | orchestrator | 2025-06-22 12:03:58.964985 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 12:03:58.964995 | orchestrator | Sunday 22 June 2025 12:01:55 +0000 (0:00:02.704) 0:04:28.633 *********** 2025-06-22 12:03:58.965006 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.965014 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.965024 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.965030 | orchestrator | 2025-06-22 12:03:58.965035 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-22 12:03:58.965041 | orchestrator | Sunday 22 June 2025 12:01:58 +0000 (0:00:03.061) 0:04:31.694 *********** 2025-06-22 12:03:58.965046 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-22 12:03:58.965052 | orchestrator | 2025-06-22 12:03:58.965057 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-22 12:03:58.965067 | orchestrator | Sunday 22 June 2025 12:01:59 +0000 (0:00:00.834) 0:04:32.529 *********** 2025-06-22 12:03:58.965073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2025-06-22 12:03:58.965079 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.965085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.965091 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.965096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.965102 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.965107 | orchestrator | 2025-06-22 12:03:58.965112 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-22 12:03:58.965118 | orchestrator | Sunday 22 June 2025 12:02:00 +0000 (0:00:01.287) 0:04:33.817 *********** 2025-06-22 12:03:58.965123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout 
tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.965134 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.965139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.965145 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.965153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 12:03:58.965159 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.965165 | orchestrator | 2025-06-22 12:03:58.965170 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-22 12:03:58.965175 | orchestrator | Sunday 22 June 2025 12:02:02 +0000 (0:00:01.676) 0:04:35.494 *********** 2025-06-22 12:03:58.965181 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.965186 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 12:03:58.965191 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.965197 | orchestrator | 2025-06-22 12:03:58.965202 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 12:03:58.965210 | orchestrator | Sunday 22 June 2025 12:02:03 +0000 (0:00:01.221) 0:04:36.715 *********** 2025-06-22 12:03:58.965216 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:03:58.965221 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:03:58.965227 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:03:58.965232 | orchestrator | 2025-06-22 12:03:58.965238 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 12:03:58.965243 | orchestrator | Sunday 22 June 2025 12:02:06 +0000 (0:00:02.497) 0:04:39.213 *********** 2025-06-22 12:03:58.965248 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:03:58.965254 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:03:58.965259 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:03:58.965264 | orchestrator | 2025-06-22 12:03:58.965270 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-22 12:03:58.965275 | orchestrator | Sunday 22 June 2025 12:02:09 +0000 (0:00:03.170) 0:04:42.384 *********** 2025-06-22 12:03:58.965280 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-22 12:03:58.965286 | orchestrator | 2025-06-22 12:03:58.965291 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-22 12:03:58.965297 | orchestrator | Sunday 22 June 2025 12:02:10 +0000 (0:00:01.074) 0:04:43.458 *********** 2025-06-22 12:03:58.965302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 12:03:58.965312 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.965317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 12:03:58.965323 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.965329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 12:03:58.965334 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.965340 | orchestrator | 2025-06-22 12:03:58.965345 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-22 12:03:58.965350 | orchestrator | Sunday 22 June 
2025 12:02:11 +0000 (0:00:01.015) 0:04:44.474 ***********
2025-06-22 12:03:58.965370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-22 12:03:58.965376 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.965382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-22 12:03:58.965388 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.965397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-22 12:03:58.965402 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.965408 | orchestrator |
2025-06-22 12:03:58.965413 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-06-22 12:03:58.965419 | orchestrator | Sunday 22 June 2025 12:02:12 +0000 (0:00:01.317) 0:04:45.791 ***********
2025-06-22 12:03:58.965424 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.965430 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.965439 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.965444 | orchestrator |
2025-06-22 12:03:58.965450 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-22 12:03:58.965455 | orchestrator | Sunday 22 June 2025 12:02:14 +0000 (0:00:01.872) 0:04:47.664 ***********
2025-06-22 12:03:58.965460 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.965466 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.965471 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.965476 | orchestrator |
2025-06-22 12:03:58.965482 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-22 12:03:58.965487 | orchestrator | Sunday 22 June 2025 12:02:16 +0000 (0:00:02.313) 0:04:49.977 ***********
2025-06-22 12:03:58.965493 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.965498 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.965503 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.965509 | orchestrator |
2025-06-22 12:03:58.965514 | orchestrator | TASK [include_role : octavia] **************************************************
2025-06-22 12:03:58.965519 | orchestrator | Sunday 22 June 2025 12:02:20 +0000 (0:00:03.254) 0:04:53.232 ***********
2025-06-22 12:03:58.965525 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.965530 | orchestrator |
2025-06-22 12:03:58.965536 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-06-22 12:03:58.965541 | orchestrator | Sunday 22 June 2025 12:02:21 +0000 (0:00:01.363) 0:04:54.595 ***********
2025-06-22 12:03:58.965546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.965552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-22 12:03:58.965561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.965586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.965591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-22 12:03:58.965597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.965730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.965736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-22 12:03:58.965741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.965762 | orchestrator |
2025-06-22 12:03:58.965767 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-06-22 12:03:58.965773 | orchestrator | Sunday 22 June 2025 12:02:25 +0000 (0:00:03.874) 0:04:58.469 ***********
2025-06-22 12:03:58.965796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.965807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-22 12:03:58.965813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.965830 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.965838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.965864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-22 12:03:58.965870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.965887 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.965893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-22 12:03:58.965902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-22 12:03:58.965927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-22 12:03:58.965939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:03:58.965945 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.965950 | orchestrator |
2025-06-22 12:03:58.965956 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-06-22 12:03:58.965961 | orchestrator | Sunday 22 June 2025 12:02:26 +0000 (0:00:00.750) 0:04:59.220 ***********
2025-06-22 12:03:58.965966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-22 12:03:58.965972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-22 12:03:58.965978 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.965983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-22 12:03:58.965988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-22 12:03:58.965994 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.966000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-22 12:03:58.966005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-22 12:03:58.966037 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.966044 | orchestrator |
2025-06-22 12:03:58.966050 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-06-22 12:03:58.966055 | orchestrator | Sunday 22 June 2025 12:02:27 +0000 (0:00:00.888) 0:05:00.109 ***********
2025-06-22 12:03:58.966060 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.966069 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.966074 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.966080 | orchestrator |
2025-06-22 12:03:58.966085 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-06-22 12:03:58.966090 | orchestrator | Sunday 22 June 2025 12:02:28 +0000 (0:00:01.885) 0:05:01.994 ***********
2025-06-22 12:03:58.966096 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.966101 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.966107 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.966112 | orchestrator |
2025-06-22 12:03:58.966117 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-06-22 12:03:58.966123 | orchestrator | Sunday 22 June 2025 12:02:31 +0000 (0:00:02.115) 0:05:04.110 ***********
2025-06-22 12:03:58.966128 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:03:58.966133 | orchestrator |
2025-06-22 12:03:58.966139 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-06-22 12:03:58.966144 | orchestrator | Sunday 22 June 2025 12:02:32 +0000 (0:00:01.308) 0:05:05.418 ***********
2025-06-22 12:03:58.966169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-22 12:03:58.966176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-22 12:03:58.966182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-22 12:03:58.966196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:03:58.966218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:03:58.966226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:03:58.966232 | orchestrator |
2025-06-22 12:03:58.966237 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-06-22 12:03:58.966243 | orchestrator | Sunday 22 June 2025 12:02:37 +0000 (0:00:05.348) 0:05:10.767 ***********
2025-06-22 12:03:58.966249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-22 12:03:58.966262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:03:58.966268 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.966290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-22 12:03:58.966297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:03:58.966303 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.966309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name':
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:03:58.966327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:03:58.966352 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.966373 | orchestrator | 2025-06-22 
12:03:58.966380 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-22 12:03:58.966386 | orchestrator | Sunday 22 June 2025 12:02:38 +0000 (0:00:01.126) 0:05:11.893 *********** 2025-06-22 12:03:58.966393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 12:03:58.966417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 12:03:58.966424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 12:03:58.966431 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.966437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 12:03:58.966444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 12:03:58.966450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 12:03:58.966457 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.966463 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 12:03:58.966470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 12:03:58.966481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 12:03:58.966487 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.966493 | orchestrator | 2025-06-22 12:03:58.966500 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-22 12:03:58.966506 | orchestrator | Sunday 22 June 2025 12:02:39 +0000 (0:00:00.911) 0:05:12.804 *********** 2025-06-22 12:03:58.966512 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.966518 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.966524 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.966531 | orchestrator | 2025-06-22 12:03:58.966537 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-22 12:03:58.966544 | orchestrator | Sunday 22 June 2025 12:02:40 +0000 (0:00:00.471) 0:05:13.276 *********** 2025-06-22 12:03:58.966550 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.966556 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.966562 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.966569 | orchestrator | 2025-06-22 12:03:58.966575 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-06-22 12:03:58.966581 | orchestrator | Sunday 22 June 2025 12:02:41 +0000 (0:00:01.555) 0:05:14.832 *********** 2025-06-22 12:03:58.966588 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.966594 | orchestrator | 2025-06-22 12:03:58.966600 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-22 12:03:58.966606 | orchestrator | Sunday 22 June 2025 12:02:43 +0000 (0:00:01.823) 0:05:16.655 *********** 2025-06-22 12:03:58.966616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:03:58.966623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:03:58.966646 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:03:58.966678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:03:58.966687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:03:58.966710 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:03:58.966717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966761 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 12:03:58.966772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 
12:03:58.966778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 12:03:58.966784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 12:03:58.966793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 12:03:58.966815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 
12:03:58.966820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 12:03:58.966832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2025-06-22 12:03:58.966852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966890 | orchestrator | 2025-06-22 12:03:58.966895 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-22 12:03:58.966901 | orchestrator | Sunday 22 June 2025 12:02:47 +0000 (0:00:04.189) 0:05:20.845 *********** 2025-06-22 12:03:58.966906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 12:03:58.966912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:03:58.966921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 12:03:58.966952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 12:03:58.966958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.966972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.966986 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.966995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 12:03:58.967001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:03:58.967007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.967026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}})  2025-06-22 12:03:58.967040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 12:03:58.967046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 12:03:58.967052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:03:58.967063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-22 12:03:58.967079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.967088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967094 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.967106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 12:03:58.967129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 12:03:58.967135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:03:58.967156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:03:58.967162 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967168 | orchestrator | 2025-06-22 12:03:58.967173 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-22 12:03:58.967179 | orchestrator | Sunday 22 June 2025 12:02:49 +0000 (0:00:01.605) 0:05:22.450 *********** 2025-06-22 12:03:58.967184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 12:03:58.967190 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 12:03:58.967196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 12:03:58.967202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 12:03:58.967207 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 12:03:58.967218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 12:03:58.967224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 12:03:58.967230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 12:03:58.967243 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 12:03:58.967259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 12:03:58.967264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 12:03:58.967273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 12:03:58.967279 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967284 | orchestrator | 2025-06-22 12:03:58.967289 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-22 12:03:58.967295 | orchestrator | Sunday 22 June 2025 12:02:50 +0000 (0:00:01.097) 0:05:23.547 *********** 2025-06-22 12:03:58.967300 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967306 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967311 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
12:03:58.967316 | orchestrator | 2025-06-22 12:03:58.967322 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-22 12:03:58.967327 | orchestrator | Sunday 22 June 2025 12:02:50 +0000 (0:00:00.461) 0:05:24.009 *********** 2025-06-22 12:03:58.967335 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967341 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967346 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967352 | orchestrator | 2025-06-22 12:03:58.967369 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-22 12:03:58.967374 | orchestrator | Sunday 22 June 2025 12:02:52 +0000 (0:00:01.796) 0:05:25.805 *********** 2025-06-22 12:03:58.967380 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.967385 | orchestrator | 2025-06-22 12:03:58.967391 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-22 12:03:58.967396 | orchestrator | Sunday 22 June 2025 12:02:54 +0000 (0:00:01.745) 0:05:27.551 *********** 2025-06-22 12:03:58.967402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:03:58.967408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:03:58.967421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 12:03:58.967427 | orchestrator | 2025-06-22 12:03:58.967433 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-22 12:03:58.967438 | orchestrator | Sunday 22 June 2025 12:02:57 +0000 (0:00:02.829) 0:05:30.380 *********** 2025-06-22 12:03:58.967447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 12:03:58.967453 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 12:03:58.967468 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 12:03:58.967480 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967485 | orchestrator | 2025-06-22 12:03:58.967491 | orchestrator | TASK [haproxy-config : Configuring firewall for 
rabbitmq] ********************** 2025-06-22 12:03:58.967496 | orchestrator | Sunday 22 June 2025 12:02:57 +0000 (0:00:00.449) 0:05:30.829 *********** 2025-06-22 12:03:58.967501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 12:03:58.967507 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 12:03:58.967518 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 12:03:58.967531 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967537 | orchestrator | 2025-06-22 12:03:58.967542 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-22 12:03:58.967547 | orchestrator | Sunday 22 June 2025 12:02:58 +0000 (0:00:01.206) 0:05:32.035 *********** 2025-06-22 12:03:58.967553 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967558 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967564 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967569 | orchestrator | 2025-06-22 12:03:58.967574 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-22 12:03:58.967580 | orchestrator | Sunday 22 June 2025 12:02:59 +0000 (0:00:00.507) 0:05:32.543 *********** 2025-06-22 12:03:58.967585 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967591 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967596 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967601 | 
orchestrator | 2025-06-22 12:03:58.967607 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-22 12:03:58.967615 | orchestrator | Sunday 22 June 2025 12:03:00 +0000 (0:00:01.407) 0:05:33.950 *********** 2025-06-22 12:03:58.967620 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:03:58.967626 | orchestrator | 2025-06-22 12:03:58.967631 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-22 12:03:58.967637 | orchestrator | Sunday 22 June 2025 12:03:02 +0000 (0:00:02.039) 0:05:35.989 *********** 2025-06-22 12:03:58.967643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.967652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.967659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.967667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.967676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.967688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 12:03:58.967693 | orchestrator | 2025-06-22 12:03:58.967699 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-22 12:03:58.967704 | orchestrator | Sunday 22 June 2025 12:03:09 +0000 (0:00:06.247) 0:05:42.237 *********** 2025-06-22 12:03:58.967710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.967718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.967724 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.967742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.967748 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.967762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 12:03:58.967768 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967773 | orchestrator | 2025-06-22 12:03:58.967778 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-22 12:03:58.967784 | orchestrator | Sunday 22 June 2025 12:03:09 +0000 (0:00:00.647) 0:05:42.885 *********** 2025-06-22 12:03:58.967789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967818 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967827 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.967835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967874 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.967883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967900 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 12:03:58.967911 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.967916 | orchestrator | 2025-06-22 12:03:58.967922 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-22 12:03:58.967927 | orchestrator | Sunday 22 June 2025 12:03:11 +0000 (0:00:01.782) 0:05:44.667 *********** 2025-06-22 12:03:58.967932 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.967938 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.967943 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.967948 | orchestrator | 2025-06-22 12:03:58.967954 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-22 12:03:58.967959 | orchestrator | Sunday 22 June 2025 12:03:12 +0000 (0:00:01.384) 0:05:46.052 *********** 2025-06-22 12:03:58.967964 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:03:58.967970 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:03:58.967975 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:03:58.967980 | orchestrator | 2025-06-22 12:03:58.967985 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-22 12:03:58.967991 | orchestrator | Sunday 22 June 2025 12:03:15 +0000 (0:00:02.313) 0:05:48.366 *********** 2025-06-22 12:03:58.968003 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.968008 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.968014 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 12:03:58.968019 | orchestrator | 2025-06-22 12:03:58.968025 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-22 12:03:58.968030 | orchestrator | Sunday 22 June 2025 12:03:15 +0000 (0:00:00.353) 0:05:48.719 *********** 2025-06-22 12:03:58.968035 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.968041 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.968046 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.968051 | orchestrator | 2025-06-22 12:03:58.968057 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-22 12:03:58.968062 | orchestrator | Sunday 22 June 2025 12:03:16 +0000 (0:00:00.749) 0:05:49.469 *********** 2025-06-22 12:03:58.968067 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.968141 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.968147 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.968152 | orchestrator | 2025-06-22 12:03:58.968158 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-22 12:03:58.968164 | orchestrator | Sunday 22 June 2025 12:03:16 +0000 (0:00:00.331) 0:05:49.800 *********** 2025-06-22 12:03:58.968178 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.968184 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.968190 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:03:58.968195 | orchestrator | 2025-06-22 12:03:58.968201 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-22 12:03:58.968206 | orchestrator | Sunday 22 June 2025 12:03:17 +0000 (0:00:00.341) 0:05:50.142 *********** 2025-06-22 12:03:58.968212 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:03:58.968217 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:03:58.968222 | orchestrator | skipping: 
[testbed-node-2]
2025-06-22 12:03:58.968228 | orchestrator |
2025-06-22 12:03:58.968233 | orchestrator | TASK [include_role : zun] ******************************************************
2025-06-22 12:03:58.968239 | orchestrator | Sunday 22 June 2025 12:03:17 +0000 (0:00:00.345) 0:05:50.487 ***********
2025-06-22 12:03:58.968265 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968270 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968279 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968288 | orchestrator |
2025-06-22 12:03:58.968298 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-06-22 12:03:58.968307 | orchestrator | Sunday 22 June 2025 12:03:18 +0000 (0:00:01.000) 0:05:51.487 ***********
2025-06-22 12:03:58.968317 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968327 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968336 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968342 | orchestrator |
2025-06-22 12:03:58.968347 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-06-22 12:03:58.968353 | orchestrator | Sunday 22 June 2025 12:03:19 +0000 (0:00:00.726) 0:05:52.214 ***********
2025-06-22 12:03:58.968402 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968408 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968413 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968418 | orchestrator |
2025-06-22 12:03:58.968424 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-06-22 12:03:58.968429 | orchestrator | Sunday 22 June 2025 12:03:19 +0000 (0:00:00.357) 0:05:52.572 ***********
2025-06-22 12:03:58.968435 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968440 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968445 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968451 | orchestrator |
2025-06-22 12:03:58.968456 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-06-22 12:03:58.968462 | orchestrator | Sunday 22 June 2025 12:03:20 +0000 (0:00:01.297) 0:05:53.869 ***********
2025-06-22 12:03:58.968467 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968472 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968483 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968488 | orchestrator |
2025-06-22 12:03:58.968494 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-06-22 12:03:58.968499 | orchestrator | Sunday 22 June 2025 12:03:21 +0000 (0:00:00.962) 0:05:54.761 ***********
2025-06-22 12:03:58.968505 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968510 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968515 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968521 | orchestrator |
2025-06-22 12:03:58.968526 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-06-22 12:03:58.968531 | orchestrator | Sunday 22 June 2025 12:03:22 +0000 (0:00:00.962) 0:05:55.724 ***********
2025-06-22 12:03:58.968537 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.968542 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.968548 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.968553 | orchestrator |
2025-06-22 12:03:58.968558 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-06-22 12:03:58.968564 | orchestrator | Sunday 22 June 2025 12:03:31 +0000 (0:00:08.889) 0:06:04.613 ***********
2025-06-22 12:03:58.968569 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968575 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968580 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968585 | orchestrator |
2025-06-22 12:03:58.968591 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-06-22 12:03:58.968596 | orchestrator | Sunday 22 June 2025 12:03:32 +0000 (0:00:00.761) 0:06:05.375 ***********
2025-06-22 12:03:58.968602 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.968607 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.968612 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.968618 | orchestrator |
2025-06-22 12:03:58.968623 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-06-22 12:03:58.968628 | orchestrator | Sunday 22 June 2025 12:03:40 +0000 (0:00:08.642) 0:06:14.017 ***********
2025-06-22 12:03:58.968634 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968639 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968644 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968650 | orchestrator |
2025-06-22 12:03:58.968655 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-06-22 12:03:58.968660 | orchestrator | Sunday 22 June 2025 12:03:44 +0000 (0:00:03.775) 0:06:17.792 ***********
2025-06-22 12:03:58.968667 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:03:58.968675 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:03:58.968690 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:03:58.968700 | orchestrator |
2025-06-22 12:03:58.968706 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-06-22 12:03:58.968711 | orchestrator | Sunday 22 June 2025 12:03:52 +0000 (0:00:08.081) 0:06:25.874 ***********
2025-06-22 12:03:58.968717 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968722 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968727 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968733 | orchestrator |
2025-06-22 12:03:58.968738 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-06-22 12:03:58.968743 | orchestrator | Sunday 22 June 2025 12:03:53 +0000 (0:00:00.350) 0:06:26.224 ***********
2025-06-22 12:03:58.968749 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968754 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968759 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968765 | orchestrator |
2025-06-22 12:03:58.968770 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-06-22 12:03:58.968776 | orchestrator | Sunday 22 June 2025 12:03:53 +0000 (0:00:00.789) 0:06:27.014 ***********
2025-06-22 12:03:58.968781 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968786 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968796 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968805 | orchestrator |
2025-06-22 12:03:58.968810 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-06-22 12:03:58.968815 | orchestrator | Sunday 22 June 2025 12:03:54 +0000 (0:00:00.379) 0:06:27.394 ***********
2025-06-22 12:03:58.968820 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968825 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968829 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968834 | orchestrator |
2025-06-22 12:03:58.968839 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-06-22 12:03:58.968844 | orchestrator | Sunday 22 June 2025 12:03:54 +0000 (0:00:00.416) 0:06:27.811 ***********
2025-06-22 12:03:58.968848 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968853 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968859 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968866 | orchestrator |
2025-06-22 12:03:58.968875 | orchestrator | RUNNING HANDLER
[loadbalancer : Start master keepalived container] *************
2025-06-22 12:03:58.968883 | orchestrator | Sunday 22 June 2025 12:03:55 +0000 (0:00:00.417) 0:06:28.228 ***********
2025-06-22 12:03:58.968889 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:03:58.968894 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:03:58.968901 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:03:58.968909 | orchestrator |
2025-06-22 12:03:58.968917 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-06-22 12:03:58.968922 | orchestrator | Sunday 22 June 2025 12:03:55 +0000 (0:00:00.779) 0:06:29.008 ***********
2025-06-22 12:03:58.968927 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968932 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968937 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968942 | orchestrator |
2025-06-22 12:03:58.968947 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-22 12:03:58.968952 | orchestrator | Sunday 22 June 2025 12:03:56 +0000 (0:00:01.008) 0:06:30.017 ***********
2025-06-22 12:03:58.968956 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:03:58.968961 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:03:58.968966 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:03:58.968971 | orchestrator |
2025-06-22 12:03:58.968976 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:03:58.968981 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-22 12:03:58.968986 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-22 12:03:58.968991 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-22 12:03:58.968996 | orchestrator |
2025-06-22 12:03:58.969001 | orchestrator |
2025-06-22 12:03:58.969006 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:03:58.969011 | orchestrator | Sunday 22 June 2025 12:03:57 +0000 (0:00:00.827) 0:06:30.844 ***********
2025-06-22 12:03:58.969015 | orchestrator | ===============================================================================
2025-06-22 12:03:58.969020 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.89s
2025-06-22 12:03:58.969025 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.64s
2025-06-22 12:03:58.969030 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.08s
2025-06-22 12:03:58.969035 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.00s
2025-06-22 12:03:58.969040 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.25s
2025-06-22 12:03:58.969045 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.91s
2025-06-22 12:03:58.969049 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.35s
2025-06-22 12:03:58.969054 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.62s
2025-06-22 12:03:58.969063 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.59s
2025-06-22 12:03:58.969067 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.48s
2025-06-22 12:03:58.969072 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.35s
2025-06-22 12:03:58.969077 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.29s
2025-06-22 12:03:58.969082 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.22s
2025-06-22 12:03:58.969090 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.19s
2025-06-22 12:03:58.969095 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.16s
2025-06-22 12:03:58.969099 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.97s
2025-06-22 12:03:58.969104 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.87s
2025-06-22 12:03:58.969109 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.87s
2025-06-22 12:03:58.969114 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.83s
2025-06-22 12:03:58.969119 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 3.79s
2025-06-22 12:03:58.969124 | orchestrator | 2025-06-22 12:03:58 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:02.011674 | orchestrator | 2025-06-22 12:04:02 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:02.013698 | orchestrator | 2025-06-22 12:04:02 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:02.015716 | orchestrator | 2025-06-22 12:04:02 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:02.015825 | orchestrator | 2025-06-22 12:04:02 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:05.075154 | orchestrator | 2025-06-22 12:04:05 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:05.078228 | orchestrator | 2025-06-22 12:04:05 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:05.079697 | orchestrator | 2025-06-22 12:04:05 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:05.079872 | orchestrator | 2025-06-22 12:04:05 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:08.129920 | orchestrator | 2025-06-22 12:04:08 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:08.131107 | orchestrator | 2025-06-22 12:04:08 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:08.131140 | orchestrator | 2025-06-22 12:04:08 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:08.131153 | orchestrator | 2025-06-22 12:04:08 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:11.173045 | orchestrator | 2025-06-22 12:04:11 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:11.173167 | orchestrator | 2025-06-22 12:04:11 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:11.173575 | orchestrator | 2025-06-22 12:04:11 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:11.173605 | orchestrator | 2025-06-22 12:04:11 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:14.201066 | orchestrator | 2025-06-22 12:04:14 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:14.202869 | orchestrator | 2025-06-22 12:04:14 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:14.203330 | orchestrator | 2025-06-22 12:04:14 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:14.203357 | orchestrator | 2025-06-22 12:04:14 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:17.256495 | orchestrator | 2025-06-22 12:04:17 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:17.257498 | orchestrator | 2025-06-22 12:04:17 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:17.259819 | orchestrator | 2025-06-22 12:04:17 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:17.260039 | orchestrator | 2025-06-22 12:04:17 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:20.311024 | orchestrator | 2025-06-22 12:04:20 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:20.313423 | orchestrator | 2025-06-22 12:04:20 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:20.313459 | orchestrator | 2025-06-22 12:04:20 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:20.313472 | orchestrator | 2025-06-22 12:04:20 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:23.362421 | orchestrator | 2025-06-22 12:04:23 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:23.363384 | orchestrator | 2025-06-22 12:04:23 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:23.364585 | orchestrator | 2025-06-22 12:04:23 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:23.364611 | orchestrator | 2025-06-22 12:04:23 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:26.410327 | orchestrator | 2025-06-22 12:04:26 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:26.411892 | orchestrator | 2025-06-22 12:04:26 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:26.411933 | orchestrator | 2025-06-22 12:04:26 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:26.411946 | orchestrator | 2025-06-22 12:04:26 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:29.474103 | orchestrator | 2025-06-22 12:04:29 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:29.474213 | orchestrator | 2025-06-22 12:04:29 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:29.474227 | orchestrator | 2025-06-22 12:04:29 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:29.474239 | orchestrator | 2025-06-22 12:04:29 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:32.528266 | orchestrator | 2025-06-22 12:04:32 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:32.528369 | orchestrator | 2025-06-22 12:04:32 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:32.529895 | orchestrator | 2025-06-22 12:04:32 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:32.530139 | orchestrator | 2025-06-22 12:04:32 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:35.580403 | orchestrator | 2025-06-22 12:04:35 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:35.586511 | orchestrator | 2025-06-22 12:04:35 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:35.588033 | orchestrator | 2025-06-22 12:04:35 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:35.588150 | orchestrator | 2025-06-22 12:04:35 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:38.638594 | orchestrator | 2025-06-22 12:04:38 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:38.640796 | orchestrator | 2025-06-22 12:04:38 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED
2025-06-22 12:04:38.643792 | orchestrator | 2025-06-22 12:04:38 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED
2025-06-22 12:04:38.644276 | orchestrator | 2025-06-22 12:04:38 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:04:41.700818 | orchestrator | 2025-06-22 12:04:41 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED
2025-06-22 12:04:41.702693 | orchestrator |
2025-06-22 12:04:41 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:04:41.704776 | orchestrator | 2025-06-22 12:04:41 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:04:41.704805 | orchestrator | 2025-06-22 12:04:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:04:44.761544 | orchestrator | 2025-06-22 12:04:44 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:04:44.764076 | orchestrator | 2025-06-22 12:04:44 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:04:44.770361 | orchestrator | 2025-06-22 12:04:44 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:04:44.770400 | orchestrator | 2025-06-22 12:04:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:04:47.825972 | orchestrator | 2025-06-22 12:04:47 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:04:47.827721 | orchestrator | 2025-06-22 12:04:47 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:04:47.829418 | orchestrator | 2025-06-22 12:04:47 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:04:47.829507 | orchestrator | 2025-06-22 12:04:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:04:50.879179 | orchestrator | 2025-06-22 12:04:50 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:04:50.879598 | orchestrator | 2025-06-22 12:04:50 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:04:50.882901 | orchestrator | 2025-06-22 12:04:50 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:04:50.883059 | orchestrator | 2025-06-22 12:04:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:04:53.928327 | orchestrator | 2025-06-22 12:04:53 | INFO  | Task 
f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:04:53.930459 | orchestrator | 2025-06-22 12:04:53 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:04:53.935186 | orchestrator | 2025-06-22 12:04:53 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:04:53.935214 | orchestrator | 2025-06-22 12:04:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:04:56.980969 | orchestrator | 2025-06-22 12:04:56 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:04:56.982197 | orchestrator | 2025-06-22 12:04:56 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:04:56.983865 | orchestrator | 2025-06-22 12:04:56 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:04:56.983899 | orchestrator | 2025-06-22 12:04:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:00.028703 | orchestrator | 2025-06-22 12:05:00 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:00.029698 | orchestrator | 2025-06-22 12:05:00 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:00.031631 | orchestrator | 2025-06-22 12:05:00 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:00.031670 | orchestrator | 2025-06-22 12:05:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:03.077221 | orchestrator | 2025-06-22 12:05:03 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:03.077325 | orchestrator | 2025-06-22 12:05:03 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:03.078089 | orchestrator | 2025-06-22 12:05:03 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:03.078119 | orchestrator | 2025-06-22 12:05:03 | INFO  | Wait 1 second(s) until the next 
check 2025-06-22 12:05:06.120101 | orchestrator | 2025-06-22 12:05:06 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:06.122151 | orchestrator | 2025-06-22 12:05:06 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:06.124596 | orchestrator | 2025-06-22 12:05:06 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:06.124635 | orchestrator | 2025-06-22 12:05:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:09.173930 | orchestrator | 2025-06-22 12:05:09 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:09.175510 | orchestrator | 2025-06-22 12:05:09 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:09.177084 | orchestrator | 2025-06-22 12:05:09 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:09.177230 | orchestrator | 2025-06-22 12:05:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:12.230389 | orchestrator | 2025-06-22 12:05:12 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:12.232115 | orchestrator | 2025-06-22 12:05:12 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:12.233752 | orchestrator | 2025-06-22 12:05:12 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:12.233798 | orchestrator | 2025-06-22 12:05:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:15.289997 | orchestrator | 2025-06-22 12:05:15 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:15.291364 | orchestrator | 2025-06-22 12:05:15 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:15.293135 | orchestrator | 2025-06-22 12:05:15 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 
12:05:15.293182 | orchestrator | 2025-06-22 12:05:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:18.347623 | orchestrator | 2025-06-22 12:05:18 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:18.349752 | orchestrator | 2025-06-22 12:05:18 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:18.352504 | orchestrator | 2025-06-22 12:05:18 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:18.352541 | orchestrator | 2025-06-22 12:05:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:21.403682 | orchestrator | 2025-06-22 12:05:21 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:21.404249 | orchestrator | 2025-06-22 12:05:21 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:21.405190 | orchestrator | 2025-06-22 12:05:21 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:21.405397 | orchestrator | 2025-06-22 12:05:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:24.455010 | orchestrator | 2025-06-22 12:05:24 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:24.455804 | orchestrator | 2025-06-22 12:05:24 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:24.458122 | orchestrator | 2025-06-22 12:05:24 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:24.458363 | orchestrator | 2025-06-22 12:05:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:27.512704 | orchestrator | 2025-06-22 12:05:27 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:27.514927 | orchestrator | 2025-06-22 12:05:27 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:27.517107 | orchestrator | 2025-06-22 12:05:27 | 
INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:27.517145 | orchestrator | 2025-06-22 12:05:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:30.568419 | orchestrator | 2025-06-22 12:05:30 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:30.569421 | orchestrator | 2025-06-22 12:05:30 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:30.571023 | orchestrator | 2025-06-22 12:05:30 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:30.571058 | orchestrator | 2025-06-22 12:05:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:33.619626 | orchestrator | 2025-06-22 12:05:33 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:33.621476 | orchestrator | 2025-06-22 12:05:33 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:33.624033 | orchestrator | 2025-06-22 12:05:33 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:33.624074 | orchestrator | 2025-06-22 12:05:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:36.663104 | orchestrator | 2025-06-22 12:05:36 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:36.663987 | orchestrator | 2025-06-22 12:05:36 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:36.665537 | orchestrator | 2025-06-22 12:05:36 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:36.665568 | orchestrator | 2025-06-22 12:05:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:39.708353 | orchestrator | 2025-06-22 12:05:39 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:39.709728 | orchestrator | 2025-06-22 12:05:39 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in 
state STARTED 2025-06-22 12:05:39.711296 | orchestrator | 2025-06-22 12:05:39 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:39.711320 | orchestrator | 2025-06-22 12:05:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:42.758427 | orchestrator | 2025-06-22 12:05:42 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:42.760234 | orchestrator | 2025-06-22 12:05:42 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:42.762983 | orchestrator | 2025-06-22 12:05:42 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:42.763726 | orchestrator | 2025-06-22 12:05:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:45.814614 | orchestrator | 2025-06-22 12:05:45 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:45.815777 | orchestrator | 2025-06-22 12:05:45 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:45.819164 | orchestrator | 2025-06-22 12:05:45 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:45.819284 | orchestrator | 2025-06-22 12:05:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:48.867960 | orchestrator | 2025-06-22 12:05:48 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:48.869704 | orchestrator | 2025-06-22 12:05:48 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:48.873082 | orchestrator | 2025-06-22 12:05:48 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:48.873115 | orchestrator | 2025-06-22 12:05:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:51.919369 | orchestrator | 2025-06-22 12:05:51 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:51.920995 | orchestrator 
| 2025-06-22 12:05:51 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:51.923565 | orchestrator | 2025-06-22 12:05:51 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:51.923594 | orchestrator | 2025-06-22 12:05:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:54.968242 | orchestrator | 2025-06-22 12:05:54 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:54.969168 | orchestrator | 2025-06-22 12:05:54 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:54.969967 | orchestrator | 2025-06-22 12:05:54 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:54.970002 | orchestrator | 2025-06-22 12:05:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:05:58.018200 | orchestrator | 2025-06-22 12:05:58 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:05:58.021160 | orchestrator | 2025-06-22 12:05:58 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:05:58.021989 | orchestrator | 2025-06-22 12:05:58 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:05:58.022013 | orchestrator | 2025-06-22 12:05:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:01.080307 | orchestrator | 2025-06-22 12:06:01 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:01.080452 | orchestrator | 2025-06-22 12:06:01 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:06:01.080683 | orchestrator | 2025-06-22 12:06:01 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:01.080708 | orchestrator | 2025-06-22 12:06:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:04.133226 | orchestrator | 2025-06-22 12:06:04 | INFO  | Task 
f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:04.135488 | orchestrator | 2025-06-22 12:06:04 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:06:04.136867 | orchestrator | 2025-06-22 12:06:04 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:04.137435 | orchestrator | 2025-06-22 12:06:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:07.184735 | orchestrator | 2025-06-22 12:06:07 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:07.186923 | orchestrator | 2025-06-22 12:06:07 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:06:07.189122 | orchestrator | 2025-06-22 12:06:07 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:07.189531 | orchestrator | 2025-06-22 12:06:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:10.225415 | orchestrator | 2025-06-22 12:06:10 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:10.226590 | orchestrator | 2025-06-22 12:06:10 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state STARTED 2025-06-22 12:06:10.228097 | orchestrator | 2025-06-22 12:06:10 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:10.228322 | orchestrator | 2025-06-22 12:06:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:13.272995 | orchestrator | 2025-06-22 12:06:13 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:13.279255 | orchestrator | 2025-06-22 12:06:13.279331 | orchestrator | 2025-06-22 12:06:13 | INFO  | Task db86b6a2-a684-4f38-b55c-9876142aaa48 is in state SUCCESS 2025-06-22 12:06:13.281776 | orchestrator | 2025-06-22 12:06:13.281822 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-22 12:06:13.281836 | 
orchestrator | 2025-06-22 12:06:13.281847 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 12:06:13.281858 | orchestrator | Sunday 22 June 2025 11:54:16 +0000 (0:00:00.750) 0:00:00.750 *********** 2025-06-22 12:06:13.281870 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.281946 | orchestrator | 2025-06-22 12:06:13.281962 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 12:06:13.281974 | orchestrator | Sunday 22 June 2025 11:54:17 +0000 (0:00:01.252) 0:00:02.002 *********** 2025-06-22 12:06:13.281986 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.281998 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.282009 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.282166 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.282188 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.282199 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.282210 | orchestrator | 2025-06-22 12:06:13.282222 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 12:06:13.282233 | orchestrator | Sunday 22 June 2025 11:54:19 +0000 (0:00:01.565) 0:00:03.568 *********** 2025-06-22 12:06:13.282243 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.282281 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.282292 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.282303 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.282314 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.282324 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.282335 | orchestrator | 2025-06-22 12:06:13.282346 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 12:06:13.282357 
| orchestrator | Sunday 22 June 2025 11:54:19 +0000 (0:00:00.766) 0:00:04.335 *********** 2025-06-22 12:06:13.282368 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.282381 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.282393 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.282427 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.282439 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.282452 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.282465 | orchestrator | 2025-06-22 12:06:13.282532 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 12:06:13.282545 | orchestrator | Sunday 22 June 2025 11:54:20 +0000 (0:00:01.011) 0:00:05.347 *********** 2025-06-22 12:06:13.282557 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.282569 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.282610 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.282622 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.282635 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.282646 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.282658 | orchestrator | 2025-06-22 12:06:13.282670 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 12:06:13.282682 | orchestrator | Sunday 22 June 2025 11:54:21 +0000 (0:00:00.997) 0:00:06.345 *********** 2025-06-22 12:06:13.282695 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.282707 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.282719 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.282731 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.282800 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.282812 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.282823 | orchestrator | 2025-06-22 12:06:13.282834 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] 
********************* 2025-06-22 12:06:13.282844 | orchestrator | Sunday 22 June 2025 11:54:22 +0000 (0:00:00.849) 0:00:07.194 *********** 2025-06-22 12:06:13.282855 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.282866 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.282876 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.282887 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.282898 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.282927 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.282939 | orchestrator | 2025-06-22 12:06:13.282950 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 12:06:13.282961 | orchestrator | Sunday 22 June 2025 11:54:24 +0000 (0:00:01.302) 0:00:08.497 *********** 2025-06-22 12:06:13.282972 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.282984 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.282994 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.283005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.283016 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.283026 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.283037 | orchestrator | 2025-06-22 12:06:13.283048 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 12:06:13.283101 | orchestrator | Sunday 22 June 2025 11:54:25 +0000 (0:00:01.078) 0:00:09.575 *********** 2025-06-22 12:06:13.283138 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.283150 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.283160 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.283171 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.283182 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.283192 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.283203 | orchestrator | 2025-06-22 12:06:13.283265 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 12:06:13.283276 | orchestrator | Sunday 22 June 2025 11:54:26 +0000 (0:00:00.963) 0:00:10.538 *********** 2025-06-22 12:06:13.283287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:06:13.283298 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:06:13.283308 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:06:13.283319 | orchestrator | 2025-06-22 12:06:13.283330 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 12:06:13.283340 | orchestrator | Sunday 22 June 2025 11:54:26 +0000 (0:00:00.598) 0:00:11.137 *********** 2025-06-22 12:06:13.283359 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.283370 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.283381 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.283392 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.283444 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.283455 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.283465 | orchestrator | 2025-06-22 12:06:13.283497 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 12:06:13.283509 | orchestrator | Sunday 22 June 2025 11:54:27 +0000 (0:00:00.890) 0:00:12.027 *********** 2025-06-22 12:06:13.283520 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:06:13.283531 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:06:13.283542 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:06:13.283552 | orchestrator | 2025-06-22 12:06:13.283563 | orchestrator 
| TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 12:06:13.283574 | orchestrator | Sunday 22 June 2025 11:54:30 +0000 (0:00:03.008) 0:00:15.036 *********** 2025-06-22 12:06:13.283584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 12:06:13.283595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 12:06:13.283606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 12:06:13.283617 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.283627 | orchestrator | 2025-06-22 12:06:13.283638 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-22 12:06:13.283649 | orchestrator | Sunday 22 June 2025 11:54:31 +0000 (0:00:01.139) 0:00:16.176 *********** 2025-06-22 12:06:13.283661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283696 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.283707 | orchestrator | 2025-06-22 12:06:13.283718 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 12:06:13.283729 | orchestrator | Sunday 22 June 2025 11:54:32 +0000 
(0:00:01.017) 0:00:17.193 *********** 2025-06-22 12:06:13.283741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283836 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.283848 | orchestrator | 2025-06-22 12:06:13.283859 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 12:06:13.283870 | orchestrator | Sunday 22 June 2025 11:54:33 +0000 (0:00:00.299) 0:00:17.493 *********** 2025-06-22 12:06:13.283890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 
11:54:28.239325', 'end': '2025-06-22 11:54:28.518034', 'delta': '0:00:00.278709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 11:54:29.319854', 'end': '2025-06-22 11:54:29.627498', 'delta': '0:00:00.307644', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.283956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 11:54:30.135205', 'end': '2025-06-22 11:54:30.425685', 'delta': '0:00:00.290480', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 
'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.283967 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.283978 | orchestrator |
2025-06-22 12:06:13.283989 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-22 12:06:13.284091 | orchestrator | Sunday 22 June 2025 11:54:33 +0000 (0:00:00.192) 0:00:17.685 ***********
2025-06-22 12:06:13.284103 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.284114 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.284125 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.284135 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.284146 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.284157 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.284168 | orchestrator |
2025-06-22 12:06:13.284178 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-22 12:06:13.284190 | orchestrator | Sunday 22 June 2025 11:54:34 +0000 (0:00:01.473) 0:00:19.159 ***********
2025-06-22 12:06:13.284200 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-22 12:06:13.284211 | orchestrator |
2025-06-22 12:06:13.284222 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-22 12:06:13.284246 | orchestrator | Sunday 22 June 2025 11:54:35 +0000 (0:00:00.703) 0:00:19.862 ***********
2025-06-22 12:06:13.284258 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284269 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.284280 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.284290 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.284301 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.284312 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.284323 | orchestrator |
2025-06-22 12:06:13.284334 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-22 12:06:13.284345 | orchestrator | Sunday 22 June 2025 11:54:36 +0000 (0:00:01.184) 0:00:21.046 ***********
2025-06-22 12:06:13.284356 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284366 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.284377 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.284388 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.284399 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.284410 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.284420 | orchestrator |
2025-06-22 12:06:13.284431 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-22 12:06:13.284442 | orchestrator | Sunday 22 June 2025 11:54:38 +0000 (0:00:01.374) 0:00:22.421 ***********
2025-06-22 12:06:13.284453 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284463 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.284474 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.284485 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.284558 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.284569 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.284580 | orchestrator |
2025-06-22 12:06:13.284591 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-22 12:06:13.284602 | orchestrator | Sunday 22 June 2025 11:54:39 +0000 (0:00:01.280) 0:00:23.701 ***********
2025-06-22 12:06:13.284612 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284623 | orchestrator |
2025-06-22 12:06:13.284634 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-22 12:06:13.284671 | orchestrator | Sunday 22 June 2025 11:54:39 +0000 (0:00:00.122) 0:00:23.824 ***********
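The fsid bookkeeping in the ceph-facts tasks recorded above follows one simple rule: if a cluster is already running, reuse the fsid it reports (here the "Get current fsid if cluster is already running" task succeeded, delegated to testbed-node-0, so the fallbacks were skipped); only a first deployment generates a fresh one. A minimal sketch of that decision, with an illustrative helper name and a random UUID standing in for ceph-ansible's actual implementation:

```python
import uuid

def pick_cluster_fsid(current_fsid):
    """Illustrative helper: reuse a running cluster's fsid, else make one."""
    if current_fsid:
        # "Set_fact fsid from current_fsid": an existing cluster keeps its id.
        return current_fsid
    # "Generate cluster fsid": only a first deployment gets a fresh UUID.
    return str(uuid.uuid4())
```

In this run the generate/set_fact branches all show as skipped because an fsid was already available from the running cluster.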
2025-06-22 12:06:13.284682 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284722 | orchestrator |
2025-06-22 12:06:13.284734 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-22 12:06:13.284745 | orchestrator | Sunday 22 June 2025 11:54:39 +0000 (0:00:00.285) 0:00:24.109 ***********
2025-06-22 12:06:13.284756 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284767 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.284777 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.284788 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.284798 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.284809 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.284820 | orchestrator |
2025-06-22 12:06:13.284853 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-22 12:06:13.284865 | orchestrator | Sunday 22 June 2025 11:54:40 +0000 (0:00:00.949) 0:00:25.059 ***********
2025-06-22 12:06:13.284876 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.284886 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.284897 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.284954 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.284966 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.284977 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.285033 | orchestrator |
2025-06-22 12:06:13.285046 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-22 12:06:13.285078 | orchestrator | Sunday 22 June 2025 11:54:42 +0000 (0:00:01.452) 0:00:26.511 ***********
2025-06-22 12:06:13.285126 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.285138 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.285157 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.285168 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.285178 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.285189 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.285199 | orchestrator |
2025-06-22 12:06:13.285244 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-22 12:06:13.285257 | orchestrator | Sunday 22 June 2025 11:54:42 +0000 (0:00:00.810) 0:00:27.321 ***********
2025-06-22 12:06:13.285268 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.285279 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.285289 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.285300 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.285310 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.285321 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.285331 | orchestrator |
2025-06-22 12:06:13.285375 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-22 12:06:13.285385 | orchestrator | Sunday 22 June 2025 11:54:43 +0000 (0:00:01.033) 0:00:28.355 ***********
2025-06-22 12:06:13.285394 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.285404 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.285413 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.285422 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.285432 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.285441 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.285451 | orchestrator |
2025-06-22 12:06:13.285460 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-22 12:06:13.285470 | orchestrator | Sunday 22 June 2025 11:54:44 +0000 (0:00:00.712) 0:00:29.067 ***********
2025-06-22 12:06:13.285479 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.285488 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.285498 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.285507 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.285517 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.285526 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.285536 | orchestrator | 2025-06-22 12:06:13.285545 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 12:06:13.285555 | orchestrator | Sunday 22 June 2025 11:54:45 +0000 (0:00:00.829) 0:00:29.896 *********** 2025-06-22 12:06:13.285565 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.285574 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.285584 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.285593 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.285623 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.285633 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.285664 | orchestrator | 2025-06-22 12:06:13.285674 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 12:06:13.285684 | orchestrator | Sunday 22 June 2025 11:54:46 +0000 (0:00:01.000) 0:00:30.897 *********** 2025-06-22 12:06:13.285695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f', 'dm-uuid-LVM-5z2kMErXdzqhz6sGodEbou1xMVtAcvKqvPv92Sa4BaDuu3K61FJbBQLqXSUrKRT2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 
1}})  2025-06-22 12:06:13.285706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6', 'dm-uuid-LVM-DmuDx4q0eg9c7S39c7306HiSMFddoeKvpLAa0XFHzC1czDgajcKZPlc2LLeS5Lax'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-06-22 12:06:13.285767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4', 'dm-uuid-LVM-x1R5ovTZjx0BSQusAolddecCSdxeaymHnPe0JqCYsauL0BU6MCAzn1rRsQCc2u3m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc', 'dm-uuid-LVM-ag0D9SVtA7CjPrJ09lGiURgPqhP0rrh81ZbsSXtAxPezQngHKhcacKIqTafegz5S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.285998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.286061 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.286076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GkFboE-SZ5j-PxRK-4llI-c7Kk-Tsfz-iL7GcI', 'scsi-0QEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2', 'scsi-SQEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.286087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.286097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.286107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D4mnQ5-hyef-SylW-6zGM-naDV-MTqz-6xfjrr', 'scsi-0QEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357', 'scsi-SQEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.286117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.286145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.286158 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1', 'dm-uuid-LVM-acpfe85L5vZuA4u1jglxT9JbzXosiaumIcUM3C65UsG6SE4zjN6U4I4NdDNSd1lJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.286168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b', 'scsi-SQEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.286180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XIFEhX-1znK-U8Q8-sqVA-7VrO-xW09-7AWieS', 'scsi-0QEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273', 'scsi-SQEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.286196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330', 'dm-uuid-LVM-XLo2EjF3JS9KQI13FFIVCJ739Xx6PhXo2Ft1rKtLd9VZcEz84kQEE9xSFtmu9pZd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wzx2v6-UW7k-0R9Q-SHgb-JNbF-gFii-bsHINm', 'scsi-0QEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125', 'scsi-SQEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf', 'scsi-SQEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fOb4E1-Vi2m-r6V2-FMAw-MsLk-xAmo-CWZQUj', 'scsi-0QEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f', 'scsi-SQEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.287980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7WG3V5-C9T6-AL8i-e53y-TJpC-FvQt-dmIRlR', 'scsi-0QEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e', 'scsi-SQEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.287992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288004 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.288017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52', 'scsi-SQEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part1', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part14', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part15', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part16', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288106 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288157 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288256 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.288276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288289 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.288301 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.288314 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.288326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-06-22 12:06:13.288416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:06:13.288450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288481 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:06:13.288495 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.288508 | orchestrator | 2025-06-22 12:06:13.288520 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 12:06:13.288531 | orchestrator | Sunday 22 June 2025 11:54:48 +0000 (0:00:01.980) 0:00:32.877 *********** 2025-06-22 12:06:13.288543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f', 'dm-uuid-LVM-5z2kMErXdzqhz6sGodEbou1xMVtAcvKqvPv92Sa4BaDuu3K61FJbBQLqXSUrKRT2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288564 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4', 'dm-uuid-LVM-x1R5ovTZjx0BSQusAolddecCSdxeaymHnPe0JqCYsauL0BU6MCAzn1rRsQCc2u3m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288575 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc', 'dm-uuid-LVM-ag0D9SVtA7CjPrJ09lGiURgPqhP0rrh81ZbsSXtAxPezQngHKhcacKIqTafegz5S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6', 'dm-uuid-LVM-DmuDx4q0eg9c7S39c7306HiSMFddoeKvpLAa0XFHzC1czDgajcKZPlc2LLeS5Lax'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288603 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 
12:06:13.288766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GkFboE-SZ5j-PxRK-4llI-c7Kk-Tsfz-iL7GcI', 'scsi-0QEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2', 'scsi-SQEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288821 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D4mnQ5-hyef-SylW-6zGM-naDV-MTqz-6xfjrr', 'scsi-0QEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357', 'scsi-SQEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b', 'scsi-SQEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.288974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1', 'dm-uuid-LVM-acpfe85L5vZuA4u1jglxT9JbzXosiaumIcUM3C65UsG6SE4zjN6U4I4NdDNSd1lJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289001 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-22 12:06:13.289022 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289034 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330', 'dm-uuid-LVM-XLo2EjF3JS9KQI13FFIVCJ739Xx6PhXo2Ft1rKtLd9VZcEz84kQEE9xSFtmu9pZd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4'], 
'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XIFEhX-1znK-U8Q8-sqVA-7VrO-xW09-7AWieS', 'scsi-0QEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273', 'scsi-SQEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wzx2v6-UW7k-0R9Q-SHgb-JNbF-gFii-bsHINm', 'scsi-0QEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125', 'scsi-SQEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289098 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf', 'scsi-SQEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289121 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.289132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289143 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289197 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289208 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289230 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15', 
'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289274 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fOb4E1-Vi2m-r6V2-FMAw-MsLk-xAmo-CWZQUj', 'scsi-0QEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f', 'scsi-SQEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289297 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7WG3V5-C9T6-AL8i-e53y-TJpC-FvQt-dmIRlR', 'scsi-0QEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e', 'scsi-SQEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:06:13.289486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52', 'scsi-SQEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289498 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289521 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289562 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289574 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part1', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part14', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part15', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part16', 'scsi-SQEMU_QEMU_HARDDISK_9da8c542-f304-48e2-b337-ad2903d45683-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289591 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289613 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.289625 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289636 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289648 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289659 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289670 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item':
{'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289681 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289708 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289720 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.289732 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in
groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289744 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4c4cccf-6106-481f-a690-70f34a54183b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289762 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.289774 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289785 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.289818 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289846 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289875 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289894 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.289972 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.290004 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.290114 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.290132 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.290147 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a2610a1-2e6a-4331-b268-14d7657bafb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
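Every `skipping` record above carries a `false_condition` field naming the `when:` guard that evaluated false: `osd_auto_discovery | default(False) | bool` on the OSD hosts and `inventory_hostname in groups.get(osd_group_name, [])` on the control-plane hosts. As a hedged illustration only (the task name and fact variable below are hypothetical, not copied from the ceph-ansible source), a per-device loop guarded this way looks roughly like:

```yaml
# Illustrative sketch: loops over every entry in ansible_facts['devices'],
# so hosts outside the OSD group, or with auto-discovery disabled, print
# one "skipping" record per block device (loop0..loop7, sda, sr0, ...).
- name: Collect candidate OSD devices (hypothetical sketch)
  ansible.builtin.set_fact:
    _candidate_devices: "{{ _candidate_devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when:
    - osd_auto_discovery | default(False) | bool
    - inventory_hostname in groups.get(osd_group_name, [])
    - item.value.removable == '0'
```

Because the `when:` list is evaluated per loop item, Ansible emits the full item dict for each skip, which is what makes this section of the console so verbose.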
2025-06-22 12:06:13.290176 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:06:13.290189 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.290201 | orchestrator | 
2025-06-22 12:06:13.290214 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-22 12:06:13.290228 | orchestrator | Sunday 22 June 2025 11:54:49 +0000 (0:00:01.500) 0:00:34.378 ***********
2025-06-22 12:06:13.290250 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.290264 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.290276 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.290288 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.290301 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.290313 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.290325 | orchestrator | 
2025-06-22 12:06:13.290337 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-22 12:06:13.290349 | orchestrator | Sunday 22 June 2025 11:54:51 +0000 (0:00:01.492) 0:00:35.871 ***********
2025-06-22 12:06:13.290362 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.290374 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.290386 | 
orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.290398 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.290409 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.290419 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.290430 | orchestrator | 
2025-06-22 12:06:13.290441 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-22 12:06:13.290451 | orchestrator | Sunday 22 June 2025 11:54:52 +0000 (0:00:01.215) 0:00:37.086 ***********
2025-06-22 12:06:13.290462 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.290473 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.290483 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.290492 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.290501 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.290511 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.290520 | orchestrator | 
2025-06-22 12:06:13.290530 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-22 12:06:13.290539 | orchestrator | Sunday 22 June 2025 11:54:53 +0000 (0:00:00.589) 0:00:37.676 ***********
2025-06-22 12:06:13.290549 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.290558 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.290568 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.290577 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.290587 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.290596 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.290606 | orchestrator | 
2025-06-22 12:06:13.290615 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-22 12:06:13.290625 | orchestrator | Sunday 22 June 2025 11:54:53 +0000 (0:00:00.642) 0:00:38.318 ***********
2025-06-22 12:06:13.290634 | orchestrator | skipping: 
[testbed-node-3]
2025-06-22 12:06:13.290644 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.290659 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.290669 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.290678 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.290688 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.290697 | orchestrator | 
2025-06-22 12:06:13.290707 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-22 12:06:13.290716 | orchestrator | Sunday 22 June 2025 11:54:55 +0000 (0:00:01.280) 0:00:39.599 ***********
2025-06-22 12:06:13.290725 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.290735 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.290744 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.290754 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.290763 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.290772 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.290782 | orchestrator | 
2025-06-22 12:06:13.290791 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-22 12:06:13.290801 | orchestrator | Sunday 22 June 2025 11:54:56 +0000 (0:00:00.964) 0:00:40.563 ***********
2025-06-22 12:06:13.290810 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-22 12:06:13.290820 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-22 12:06:13.290829 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-22 12:06:13.290839 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-22 12:06:13.290848 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-22 12:06:13.290858 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-22 12:06:13.290867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
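The `Set_fact _monitor_addresses - ipv4` task above runs once per monitor (items testbed-node-0 through testbed-node-2) on every host, while its IPv6 twin is skipped because this deployment is IPv4-only. As a hedged sketch of how such a fact-building loop typically works (the variable names below are illustrative assumptions, not copied from the ceph-facts role):

```yaml
# Illustrative sketch: accumulate one {name, addr} entry per monitor host.
# The real role picks the address via monitor_address_block or the monitor
# interface; here the default IPv4 address stands in for brevity.
- name: Set_fact _monitor_addresses - ipv4 (sketch)
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([]) +
         [{'name': item,
           'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}
  loop: "{{ groups.get(mon_group_name, []) }}"
  when: ip_version == 'ipv4'
```

Each loop iteration appends one entry, which is why the console shows an `ok:` line per (host, monitor) pair.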
2025-06-22 12:06:13.290876 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-22 12:06:13.290886 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-22 12:06:13.290895 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-22 12:06:13.290930 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-22 12:06:13.290940 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-22 12:06:13.290950 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-22 12:06:13.290960 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-22 12:06:13.290969 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-22 12:06:13.290979 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-22 12:06:13.290989 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-22 12:06:13.290999 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-22 12:06:13.291009 | orchestrator | 
2025-06-22 12:06:13.291019 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-22 12:06:13.291029 | orchestrator | Sunday 22 June 2025 11:54:58 +0000 (0:00:02.557) 0:00:43.121 ***********
2025-06-22 12:06:13.291039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2025-06-22 12:06:13.291049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2025-06-22 12:06:13.291058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2025-06-22 12:06:13.291068 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.291077 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0) 
2025-06-22 12:06:13.291087 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1) 
2025-06-22 12:06:13.291097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2) 
2025-06-22 12:06:13.291106 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-0) 
2025-06-22 12:06:13.291116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1) 
2025-06-22 12:06:13.291126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2) 
2025-06-22 12:06:13.291150 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.291166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2025-06-22 12:06:13.291175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2025-06-22 12:06:13.291193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2025-06-22 12:06:13.291203 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.291213 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2025-06-22 12:06:13.291222 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2025-06-22 12:06:13.291232 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2) 
2025-06-22 12:06:13.291241 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.291251 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.291260 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0) 
2025-06-22 12:06:13.291269 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1) 
2025-06-22 12:06:13.291279 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2) 
2025-06-22 12:06:13.291288 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.291298 | orchestrator | 
2025-06-22 12:06:13.291307 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-22 12:06:13.291317 | orchestrator | Sunday 22 June 2025 11:54:59 +0000 (0:00:00.513) 0:00:43.634 ***********
2025-06-22 12:06:13.291327 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.291336 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.291346 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.291356 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.291365 | orchestrator | 2025-06-22 12:06:13.291375 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 12:06:13.291385 | orchestrator | Sunday 22 June 2025 11:55:00 +0000 (0:00:01.324) 0:00:44.958 *********** 2025-06-22 12:06:13.291395 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.291404 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.291414 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.291423 | orchestrator | 2025-06-22 12:06:13.291433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 12:06:13.291442 | orchestrator | Sunday 22 June 2025 11:55:00 +0000 (0:00:00.362) 0:00:45.320 *********** 2025-06-22 12:06:13.291452 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.291462 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.291471 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.291480 | orchestrator | 2025-06-22 12:06:13.291490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 12:06:13.291499 | orchestrator | Sunday 22 June 2025 11:55:01 +0000 (0:00:00.399) 0:00:45.720 *********** 2025-06-22 12:06:13.291509 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.291519 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.291528 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.291538 | orchestrator | 2025-06-22 12:06:13.291547 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 12:06:13.291557 | orchestrator | Sunday 22 June 2025 11:55:01 +0000 (0:00:00.287) 0:00:46.007 *********** 2025-06-22 12:06:13.291566 | orchestrator | 
ok: [testbed-node-3] 2025-06-22 12:06:13.291576 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.291586 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.291595 | orchestrator | 2025-06-22 12:06:13.291605 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 12:06:13.291615 | orchestrator | Sunday 22 June 2025 11:55:02 +0000 (0:00:00.496) 0:00:46.504 *********** 2025-06-22 12:06:13.291624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.291634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:06:13.291643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:06:13.291652 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.291662 | orchestrator | 2025-06-22 12:06:13.291672 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 12:06:13.291689 | orchestrator | Sunday 22 June 2025 11:55:02 +0000 (0:00:00.361) 0:00:46.865 *********** 2025-06-22 12:06:13.291699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.291708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:06:13.291718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:06:13.291728 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.291737 | orchestrator | 2025-06-22 12:06:13.291747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 12:06:13.291756 | orchestrator | Sunday 22 June 2025 11:55:02 +0000 (0:00:00.418) 0:00:47.284 *********** 2025-06-22 12:06:13.291766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.291775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:06:13.291785 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-06-22 12:06:13.291794 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.291804 | orchestrator | 2025-06-22 12:06:13.291813 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 12:06:13.291823 | orchestrator | Sunday 22 June 2025 11:55:03 +0000 (0:00:00.504) 0:00:47.788 *********** 2025-06-22 12:06:13.291833 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.291842 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.291852 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.291861 | orchestrator | 2025-06-22 12:06:13.291871 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 12:06:13.291881 | orchestrator | Sunday 22 June 2025 11:55:03 +0000 (0:00:00.435) 0:00:48.223 *********** 2025-06-22 12:06:13.291890 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 12:06:13.291916 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 12:06:13.291928 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 12:06:13.291938 | orchestrator | 2025-06-22 12:06:13.291954 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 12:06:13.291968 | orchestrator | Sunday 22 June 2025 11:55:04 +0000 (0:00:00.674) 0:00:48.898 *********** 2025-06-22 12:06:13.291978 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:06:13.291988 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:06:13.291998 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:06:13.292007 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 12:06:13.292017 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 12:06:13.292026 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 12:06:13.292036 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 12:06:13.292045 | orchestrator | 2025-06-22 12:06:13.292055 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 12:06:13.292064 | orchestrator | Sunday 22 June 2025 11:55:05 +0000 (0:00:00.861) 0:00:49.759 *********** 2025-06-22 12:06:13.292074 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:06:13.292083 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:06:13.292093 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:06:13.292102 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 12:06:13.292112 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 12:06:13.292121 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 12:06:13.292130 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 12:06:13.292146 | orchestrator | 2025-06-22 12:06:13.292156 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 12:06:13.292166 | orchestrator | Sunday 22 June 2025 11:55:07 +0000 (0:00:01.824) 0:00:51.583 *********** 2025-06-22 12:06:13.292175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.292185 | orchestrator | 2025-06-22 12:06:13.292195 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-06-22 12:06:13.292204 | orchestrator | Sunday 22 June 2025 11:55:08 +0000 (0:00:01.148) 0:00:52.731 *********** 2025-06-22 12:06:13.292214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.292223 | orchestrator | 2025-06-22 12:06:13.292233 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 12:06:13.292242 | orchestrator | Sunday 22 June 2025 11:55:10 +0000 (0:00:01.785) 0:00:54.517 *********** 2025-06-22 12:06:13.292252 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.292261 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.292271 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.292280 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.292290 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.292299 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.292309 | orchestrator | 2025-06-22 12:06:13.292318 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 12:06:13.292328 | orchestrator | Sunday 22 June 2025 11:55:11 +0000 (0:00:01.371) 0:00:55.888 *********** 2025-06-22 12:06:13.292337 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.292347 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.292356 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.292366 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.292375 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.292385 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.292394 | orchestrator | 2025-06-22 12:06:13.292404 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 12:06:13.292413 | orchestrator | Sunday 22 June 2025 11:55:12 +0000 
(0:00:01.038) 0:00:56.926 *********** 2025-06-22 12:06:13.292423 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.292432 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.292442 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.292451 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.292461 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.292471 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.292480 | orchestrator | 2025-06-22 12:06:13.292490 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 12:06:13.292499 | orchestrator | Sunday 22 June 2025 11:55:13 +0000 (0:00:01.451) 0:00:58.378 *********** 2025-06-22 12:06:13.292528 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.292538 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.292548 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.292557 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.292567 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.292577 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.292586 | orchestrator | 2025-06-22 12:06:13.292596 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 12:06:13.292605 | orchestrator | Sunday 22 June 2025 11:55:14 +0000 (0:00:00.853) 0:00:59.231 *********** 2025-06-22 12:06:13.292615 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.292624 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.292634 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.292643 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.292653 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.292663 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.292678 | orchestrator | 2025-06-22 12:06:13.292688 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-06-22 12:06:13.292708 | orchestrator | Sunday 22 June 2025 11:55:16 +0000 (0:00:01.189) 0:01:00.421 *********** 2025-06-22 12:06:13.292718 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.292728 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.292737 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.292747 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.292757 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.292766 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.292775 | orchestrator | 2025-06-22 12:06:13.292785 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 12:06:13.292795 | orchestrator | Sunday 22 June 2025 11:55:16 +0000 (0:00:00.518) 0:01:00.939 *********** 2025-06-22 12:06:13.292804 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.292814 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.292823 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.292833 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.292842 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.292852 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.292861 | orchestrator | 2025-06-22 12:06:13.292871 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 12:06:13.292880 | orchestrator | Sunday 22 June 2025 11:55:17 +0000 (0:00:00.752) 0:01:01.691 *********** 2025-06-22 12:06:13.292890 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.292919 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.292932 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.292942 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.292951 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.292960 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.292970 | orchestrator 
| 2025-06-22 12:06:13.292979 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 12:06:13.292989 | orchestrator | Sunday 22 June 2025 11:55:18 +0000 (0:00:01.188) 0:01:02.880 *********** 2025-06-22 12:06:13.292998 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.293008 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.293017 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.293027 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.293037 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.293046 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.293055 | orchestrator | 2025-06-22 12:06:13.293065 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 12:06:13.293075 | orchestrator | Sunday 22 June 2025 11:55:19 +0000 (0:00:01.230) 0:01:04.110 *********** 2025-06-22 12:06:13.293084 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.293094 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.293103 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.293113 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.293122 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.293131 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.293141 | orchestrator | 2025-06-22 12:06:13.293150 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 12:06:13.293160 | orchestrator | Sunday 22 June 2025 11:55:20 +0000 (0:00:01.005) 0:01:05.116 *********** 2025-06-22 12:06:13.293169 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.293179 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.293188 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.293197 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.293207 | orchestrator | ok: [testbed-node-1] 2025-06-22 
12:06:13.293216 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.293226 | orchestrator | 2025-06-22 12:06:13.293236 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 12:06:13.293245 | orchestrator | Sunday 22 June 2025 11:55:21 +0000 (0:00:01.146) 0:01:06.262 *********** 2025-06-22 12:06:13.293255 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.293270 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.293280 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.293289 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.293299 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.293309 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.293318 | orchestrator | 2025-06-22 12:06:13.293328 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 12:06:13.293337 | orchestrator | Sunday 22 June 2025 11:55:22 +0000 (0:00:00.978) 0:01:07.241 *********** 2025-06-22 12:06:13.293347 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.293356 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.293366 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.293375 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.293385 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.293394 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.293404 | orchestrator | 2025-06-22 12:06:13.293413 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 12:06:13.293423 | orchestrator | Sunday 22 June 2025 11:55:23 +0000 (0:00:01.138) 0:01:08.379 *********** 2025-06-22 12:06:13.293432 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.293442 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.293451 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.293460 | orchestrator | skipping: [testbed-node-0] 
2025-06-22 12:06:13.293470 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.293479 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.293489 | orchestrator | 2025-06-22 12:06:13.293498 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 12:06:13.293508 | orchestrator | Sunday 22 June 2025 11:55:24 +0000 (0:00:00.878) 0:01:09.258 *********** 2025-06-22 12:06:13.293517 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.293527 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.293536 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.293545 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.293555 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.293565 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.293574 | orchestrator | 2025-06-22 12:06:13.293584 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 12:06:13.293593 | orchestrator | Sunday 22 June 2025 11:55:25 +0000 (0:00:01.010) 0:01:10.268 *********** 2025-06-22 12:06:13.293603 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.293612 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.293621 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.293631 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.293640 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.293650 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.293660 | orchestrator | 2025-06-22 12:06:13.293680 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 12:06:13.293690 | orchestrator | Sunday 22 June 2025 11:55:26 +0000 (0:00:00.886) 0:01:11.154 *********** 2025-06-22 12:06:13.293700 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.293709 | orchestrator | skipping: [testbed-node-4] 
2025-06-22 12:06:13.293719 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.293728 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.293738 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.293748 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.293757 | orchestrator | 2025-06-22 12:06:13.293766 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 12:06:13.293776 | orchestrator | Sunday 22 June 2025 11:55:27 +0000 (0:00:01.099) 0:01:12.253 *********** 2025-06-22 12:06:13.293786 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.293795 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.293804 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.293814 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.293823 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.293838 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.293848 | orchestrator | 2025-06-22 12:06:13.293857 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 12:06:13.293867 | orchestrator | Sunday 22 June 2025 11:55:28 +0000 (0:00:00.911) 0:01:13.165 *********** 2025-06-22 12:06:13.293876 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.293886 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.293895 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.293955 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.293966 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.293976 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.293985 | orchestrator | 2025-06-22 12:06:13.293995 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-22 12:06:13.294005 | orchestrator | Sunday 22 June 2025 11:55:30 +0000 (0:00:01.822) 0:01:14.987 *********** 2025-06-22 12:06:13.294046 | orchestrator | changed: [testbed-node-3] 2025-06-22 
12:06:13.294058 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.294066 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.294074 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.294082 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.294089 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.294097 | orchestrator | 2025-06-22 12:06:13.294105 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-22 12:06:13.294113 | orchestrator | Sunday 22 June 2025 11:55:32 +0000 (0:00:02.213) 0:01:17.200 *********** 2025-06-22 12:06:13.294121 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.294129 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.294137 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.294145 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.294152 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.294160 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.294168 | orchestrator | 2025-06-22 12:06:13.294176 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-22 12:06:13.294184 | orchestrator | Sunday 22 June 2025 11:55:34 +0000 (0:00:02.126) 0:01:19.327 *********** 2025-06-22 12:06:13.294192 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.294200 | orchestrator | 2025-06-22 12:06:13.294208 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-22 12:06:13.294216 | orchestrator | Sunday 22 June 2025 11:55:36 +0000 (0:00:01.257) 0:01:20.584 *********** 2025-06-22 12:06:13.294224 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.294232 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.294239 
| orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.294247 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.294255 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.294263 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.294270 | orchestrator | 2025-06-22 12:06:13.294278 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-22 12:06:13.294286 | orchestrator | Sunday 22 June 2025 11:55:36 +0000 (0:00:00.774) 0:01:21.358 *********** 2025-06-22 12:06:13.294294 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.294302 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.294310 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.294317 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.294325 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.294333 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.294341 | orchestrator | 2025-06-22 12:06:13.294348 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-22 12:06:13.294356 | orchestrator | Sunday 22 June 2025 11:55:37 +0000 (0:00:00.559) 0:01:21.918 *********** 2025-06-22 12:06:13.294364 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 12:06:13.294382 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 12:06:13.294390 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 12:06:13.294397 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 12:06:13.294405 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 12:06:13.294413 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 12:06:13.294421 | 
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 12:06:13.294429 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 12:06:13.294437 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 12:06:13.294444 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 12:06:13.294453 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 12:06:13.294474 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 12:06:13.294483 | orchestrator | 2025-06-22 12:06:13.294491 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-22 12:06:13.294499 | orchestrator | Sunday 22 June 2025 11:55:39 +0000 (0:00:01.657) 0:01:23.576 *********** 2025-06-22 12:06:13.294507 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.294515 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.294523 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.294531 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.294539 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.294547 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.294555 | orchestrator | 2025-06-22 12:06:13.294563 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-22 12:06:13.294571 | orchestrator | Sunday 22 June 2025 11:55:40 +0000 (0:00:00.943) 0:01:24.519 *********** 2025-06-22 12:06:13.294579 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.294586 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.294604 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.294613 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
12:06:13.294621 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.294629 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.294636 | orchestrator | 2025-06-22 12:06:13.294644 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-22 12:06:13.294652 | orchestrator | Sunday 22 June 2025 11:55:40 +0000 (0:00:00.772) 0:01:25.291 *********** 2025-06-22 12:06:13.294660 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.294668 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.294676 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.294684 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.294692 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.294700 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.294707 | orchestrator | 2025-06-22 12:06:13.294715 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-22 12:06:13.294723 | orchestrator | Sunday 22 June 2025 11:55:41 +0000 (0:00:00.548) 0:01:25.840 *********** 2025-06-22 12:06:13.294731 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.294739 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.294747 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.294755 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.294763 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.294771 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.294779 | orchestrator | 2025-06-22 12:06:13.294787 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-22 12:06:13.294795 | orchestrator | Sunday 22 June 2025 11:55:42 +0000 (0:00:00.815) 0:01:26.655 *********** 2025-06-22 12:06:13.294812 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.294820 | orchestrator | 2025-06-22 12:06:13.294828 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-22 12:06:13.294836 | orchestrator | Sunday 22 June 2025 11:55:43 +0000 (0:00:01.220) 0:01:27.876 *********** 2025-06-22 12:06:13.294844 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.294852 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.294860 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.294868 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.294876 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.294884 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.294891 | orchestrator | 2025-06-22 12:06:13.294914 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-22 12:06:13.294930 | orchestrator | Sunday 22 June 2025 11:57:25 +0000 (0:01:41.815) 0:03:09.692 *********** 2025-06-22 12:06:13.294942 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 12:06:13.294950 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 12:06:13.294958 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 12:06:13.294966 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.294974 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 12:06:13.294982 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 12:06:13.294990 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 12:06:13.294998 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.295006 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2025-06-22 12:06:13.295014 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-22 12:06:13.295022 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-22 12:06:13.295030 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295038 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-22 12:06:13.295046 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-22 12:06:13.295054 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-22 12:06:13.295062 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295069 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-22 12:06:13.295077 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-22 12:06:13.295085 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-22 12:06:13.295093 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295101 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-22 12:06:13.295114 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-22 12:06:13.295126 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-22 12:06:13.295134 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295142 | orchestrator |
2025-06-22 12:06:13.295150 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-06-22 12:06:13.295158 | orchestrator | Sunday 22 June 2025 11:57:26 +0000 (0:00:00.826) 0:03:10.518 ***********
2025-06-22 12:06:13.295166 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295174 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295182 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295189 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295203 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295211 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295218 | orchestrator |
2025-06-22 12:06:13.295226 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-06-22 12:06:13.295234 | orchestrator | Sunday 22 June 2025 11:57:26 +0000 (0:00:00.499) 0:03:11.017 ***********
2025-06-22 12:06:13.295242 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295250 | orchestrator |
2025-06-22 12:06:13.295257 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-06-22 12:06:13.295265 | orchestrator | Sunday 22 June 2025 11:57:26 +0000 (0:00:00.135) 0:03:11.153 ***********
2025-06-22 12:06:13.295273 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295281 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295289 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295296 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295304 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295312 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295320 | orchestrator |
2025-06-22 12:06:13.295328 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-06-22 12:06:13.295335 | orchestrator | Sunday 22 June 2025 11:57:27 +0000 (0:00:00.685) 0:03:11.839 ***********
2025-06-22 12:06:13.295343 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295351 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295359 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295367 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295374 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295382 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295390 | orchestrator |
2025-06-22 12:06:13.295397 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-06-22 12:06:13.295405 | orchestrator | Sunday 22 June 2025 11:57:28 +0000 (0:00:00.634) 0:03:12.474 ***********
2025-06-22 12:06:13.295413 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295421 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295428 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295436 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295444 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295452 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295459 | orchestrator |
2025-06-22 12:06:13.295467 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-06-22 12:06:13.295475 | orchestrator | Sunday 22 June 2025 11:57:28 +0000 (0:00:00.802) 0:03:13.277 ***********
2025-06-22 12:06:13.295483 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.295491 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.295498 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.295506 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.295514 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.295522 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.295530 | orchestrator |
2025-06-22 12:06:13.295537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-06-22 12:06:13.295545 | orchestrator | Sunday 22 June 2025 11:57:32 +0000 (0:00:03.821) 0:03:17.099 ***********
2025-06-22 12:06:13.295553 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.295561 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.295569 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.295577 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.295584 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.295592 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.295600 | orchestrator |
2025-06-22 12:06:13.295608 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-06-22 12:06:13.295615 | orchestrator | Sunday 22 June 2025 11:57:33 +0000 (0:00:00.717) 0:03:17.816 ***********
2025-06-22 12:06:13.295623 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.295638 | orchestrator |
2025-06-22 12:06:13.295646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-06-22 12:06:13.295654 | orchestrator | Sunday 22 June 2025 11:57:34 +0000 (0:00:01.080) 0:03:18.897 ***********
2025-06-22 12:06:13.295662 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295669 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295677 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295685 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295693 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295701 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295709 | orchestrator |
2025-06-22 12:06:13.295717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-06-22 12:06:13.295724 | orchestrator | Sunday 22 June 2025 11:57:35 +0000 (0:00:00.652) 0:03:19.549 ***********
2025-06-22 12:06:13.295732 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295740 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295748 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295756 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295763 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295771 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295779 | orchestrator |
2025-06-22 12:06:13.295787 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-06-22 12:06:13.295794 | orchestrator | Sunday 22 June 2025 11:57:36 +0000 (0:00:00.962) 0:03:20.512 ***********
2025-06-22 12:06:13.295802 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295810 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295818 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295826 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295833 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295845 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295853 | orchestrator |
2025-06-22 12:06:13.295865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-22 12:06:13.295873 | orchestrator | Sunday 22 June 2025 11:57:36 +0000 (0:00:00.618) 0:03:21.131 ***********
2025-06-22 12:06:13.295880 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295888 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295896 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295923 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.295932 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.295940 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.295948 | orchestrator |
2025-06-22 12:06:13.295956 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-22 12:06:13.295963 | orchestrator | Sunday 22 June 2025 11:57:37 +0000 (0:00:00.955) 0:03:22.087 ***********
2025-06-22 12:06:13.295971 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.295979 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.295987 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.295995 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.296002 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.296010 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.296018 | orchestrator |
2025-06-22 12:06:13.296026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-22 12:06:13.296034 | orchestrator | Sunday 22 June 2025 11:57:38 +0000 (0:00:00.564) 0:03:22.652 ***********
2025-06-22 12:06:13.296042 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.296049 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.296057 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.296065 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.296072 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.296080 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.296088 | orchestrator |
2025-06-22 12:06:13.296096 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-22 12:06:13.296104 | orchestrator | Sunday 22 June 2025 11:57:38 +0000 (0:00:00.742) 0:03:23.394 ***********
2025-06-22 12:06:13.296117 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.296125 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.296133 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.296140 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.296151 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.296164 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.296183 | orchestrator |
2025-06-22 12:06:13.296198 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-22 12:06:13.296210 | orchestrator | Sunday 22 June 2025 11:57:39 +0000 (0:00:00.685) 0:03:24.079 ***********
2025-06-22 12:06:13.296222 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.296234 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.296247 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.296259 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.296271 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.296283 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.296297 | orchestrator |
2025-06-22 12:06:13.296310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-22 12:06:13.296323 | orchestrator | Sunday 22 June 2025 11:57:40 +0000 (0:00:00.744) 0:03:24.824 ***********
2025-06-22 12:06:13.296335 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.296343 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.296351 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.296359 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.296367 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.296374 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.296382 | orchestrator |
2025-06-22 12:06:13.296390 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-22 12:06:13.296398 | orchestrator | Sunday 22 June 2025 11:57:41 +0000 (0:00:01.021) 0:03:25.845 ***********
2025-06-22 12:06:13.296406 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.296414 | orchestrator |
2025-06-22 12:06:13.296422 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-22 12:06:13.296430 | orchestrator | Sunday 22 June 2025 11:57:42 +0000 (0:00:01.046) 0:03:26.892 ***********
2025-06-22 12:06:13.296438 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-22 12:06:13.296446 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-22 12:06:13.296454 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-22 12:06:13.296462 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-22 12:06:13.296470 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-22 12:06:13.296477 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-22 12:06:13.296485 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-22 12:06:13.296493 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-22 12:06:13.296501 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-22 12:06:13.296509 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-22 12:06:13.296516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-22 12:06:13.296524 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-22 12:06:13.296532 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-22 12:06:13.296540 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-22 12:06:13.296548 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-22 12:06:13.296556 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-22 12:06:13.296564 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-22 12:06:13.296571 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-22 12:06:13.296579 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-22 12:06:13.296595 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-22 12:06:13.296614 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-22 12:06:13.296623 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-22 12:06:13.296631 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-22 12:06:13.296639 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-22 12:06:13.296646 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-22 12:06:13.296654 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-22 12:06:13.296662 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-22 12:06:13.296669 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-22 12:06:13.296677 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-22 12:06:13.296685 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-22 12:06:13.296693 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-22 12:06:13.296700 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-22 12:06:13.296708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-22 12:06:13.296716 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-22 12:06:13.296723 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-22 12:06:13.296731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-22 12:06:13.296739 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-22 12:06:13.296746 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-22 12:06:13.296754 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-22 12:06:13.296762 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-22 12:06:13.296769 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-22 12:06:13.296777 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-22 12:06:13.296785 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-22 12:06:13.296792 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-22 12:06:13.296800 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-22 12:06:13.296808 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-22 12:06:13.296816 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-22 12:06:13.296823 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-22 12:06:13.296831 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-22 12:06:13.296839 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-22 12:06:13.296846 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-22 12:06:13.296854 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-22 12:06:13.296862 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-22 12:06:13.296870 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-22 12:06:13.296877 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-22 12:06:13.296885 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-22 12:06:13.296893 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-22 12:06:13.296939 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-22 12:06:13.296948 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-22 12:06:13.296956 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-22 12:06:13.296964 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-22 12:06:13.296976 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-22 12:06:13.296984 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-22 12:06:13.296992 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-22 12:06:13.297000 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-22 12:06:13.297007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-22 12:06:13.297015 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-22 12:06:13.297022 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-22 12:06:13.297030 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-22 12:06:13.297038 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-22 12:06:13.297045 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-22 12:06:13.297053 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-22 12:06:13.297061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-22 12:06:13.297069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-22 12:06:13.297076 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-22 12:06:13.297084 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-22 12:06:13.297091 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-22 12:06:13.297103 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-22 12:06:13.297115 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-22 12:06:13.297123 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-22 12:06:13.297129 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-22 12:06:13.297136 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-22 12:06:13.297142 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-22 12:06:13.297149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-22 12:06:13.297155 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-22 12:06:13.297162 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-22 12:06:13.297169 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-22 12:06:13.297175 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-22 12:06:13.297182 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-22 12:06:13.297188 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-22 12:06:13.297194 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-22 12:06:13.297201 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-22 12:06:13.297207 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-22 12:06:13.297214 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-22 12:06:13.297220 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-22 12:06:13.297227 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-22 12:06:13.297233 | orchestrator |
2025-06-22 12:06:13.297240 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-22 12:06:13.297246 | orchestrator | Sunday 22 June 2025 11:57:49 +0000 (0:00:06.805) 0:03:33.697 ***********
2025-06-22 12:06:13.297253 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297259 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297266 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297273 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.297283 | orchestrator |
2025-06-22 12:06:13.297290 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-22 12:06:13.297296 | orchestrator | Sunday 22 June 2025 11:57:50 +0000 (0:00:01.366) 0:03:35.063 ***********
2025-06-22 12:06:13.297303 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.297310 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.297317 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.297323 | orchestrator |
2025-06-22 12:06:13.297330 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-22 12:06:13.297336 | orchestrator | Sunday 22 June 2025 11:57:51 +0000 (0:00:00.774) 0:03:35.838 ***********
2025-06-22 12:06:13.297343 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.297350 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.297356 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.297363 | orchestrator |
2025-06-22 12:06:13.297369 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-22 12:06:13.297376 | orchestrator | Sunday 22 June 2025 11:57:53 +0000 (0:00:01.685) 0:03:37.524 ***********
2025-06-22 12:06:13.297382 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.297389 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.297396 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.297402 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297409 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297415 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297422 | orchestrator |
2025-06-22 12:06:13.297428 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-22 12:06:13.297435 | orchestrator | Sunday 22 June 2025 11:57:53 +0000 (0:00:00.803) 0:03:38.328 ***********
2025-06-22 12:06:13.297441 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.297448 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.297454 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.297461 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297467 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297474 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297480 | orchestrator |
2025-06-22 12:06:13.297487 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-22 12:06:13.297493 | orchestrator | Sunday 22 June 2025 11:57:54 +0000 (0:00:00.872) 0:03:39.200 ***********
2025-06-22 12:06:13.297500 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.297506 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.297513 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.297519 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297526 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297532 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297539 | orchestrator |
2025-06-22 12:06:13.297545 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-22 12:06:13.297552 | orchestrator | Sunday 22 June 2025 11:57:55 +0000 (0:00:00.657) 0:03:39.858 ***********
2025-06-22 12:06:13.297565 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.297573 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.297579 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.297586 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297592 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297602 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297609 | orchestrator |
2025-06-22 12:06:13.297616 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-22 12:06:13.297622 | orchestrator | Sunday 22 June 2025 11:57:56 +0000 (0:00:00.904) 0:03:40.762 ***********
2025-06-22 12:06:13.297629 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.297635 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.297642 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.297648 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297655 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297661 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297668 | orchestrator |
2025-06-22 12:06:13.297674 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-22 12:06:13.297681 | orchestrator | Sunday 22 June 2025 11:57:57 +0000 (0:00:00.648) 0:03:41.411 ***********
2025-06-22 12:06:13.297687 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.297694 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.297700 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.297707 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297714 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297720 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297727 | orchestrator |
2025-06-22 12:06:13.297733 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-22 12:06:13.297740 | orchestrator | Sunday 22 June 2025 11:57:57 +0000 (0:00:00.904) 0:03:42.316 ***********
2025-06-22 12:06:13.297746 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.297753 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.297759 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.297766 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297772 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297779 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297785 | orchestrator |
2025-06-22 12:06:13.297792 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-22 12:06:13.297799 | orchestrator | Sunday 22 June 2025 11:57:58 +0000 (0:00:00.732) 0:03:43.048 ***********
2025-06-22 12:06:13.297805 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.297812 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.297818 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.297825 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297831 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297838 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297845 | orchestrator |
2025-06-22 12:06:13.297852 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-22 12:06:13.297858 | orchestrator | Sunday 22 June 2025 11:57:59 +0000 (0:00:01.005) 0:03:44.053 ***********
2025-06-22 12:06:13.297865 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297871 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297878 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297885 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.297891 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.297898 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.297915 | orchestrator |
2025-06-22 12:06:13.297922 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-22 12:06:13.297929 | orchestrator | Sunday 22 June 2025 11:58:02 +0000 (0:00:03.290) 0:03:47.343 ***********
2025-06-22 12:06:13.297936 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.297942 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.297949 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.297956 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.297962 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.297969 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.297980 | orchestrator |
2025-06-22 12:06:13.297986 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-22 12:06:13.297993 | orchestrator | Sunday 22 June 2025 11:58:04 +0000 (0:00:01.078) 0:03:48.421 ***********
2025-06-22 12:06:13.298000 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.298006 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.298013 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.298040 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298047 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298053 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298060 | orchestrator |
2025-06-22 12:06:13.298066 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-22 12:06:13.298073 | orchestrator | Sunday 22 June 2025 11:58:04 +0000 (0:00:00.670) 0:03:49.092 ***********
2025-06-22 12:06:13.298080 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298086 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298093 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298099 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298106 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298113 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298120 | orchestrator |
2025-06-22 12:06:13.298126 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-22 12:06:13.298133 | orchestrator | Sunday 22 June 2025 11:58:05 +0000 (0:00:00.983) 0:03:50.075 ***********
2025-06-22 12:06:13.298140 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.298147 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.298153 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-22 12:06:13.298160 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298167 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298173 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298180 | orchestrator |
2025-06-22 12:06:13.298199 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-22 12:06:13.298206 | orchestrator | Sunday 22 June 2025 11:58:06 +0000 (0:00:00.652) 0:03:50.728 ***********
2025-06-22 12:06:13.298214 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-22 12:06:13.298222 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-22 12:06:13.298230 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-22 12:06:13.298237 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-22 12:06:13.298244 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298251 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298258 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-22 12:06:13.298272 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-22 12:06:13.298279 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298286 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298292 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298299 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298305 | orchestrator |
2025-06-22 12:06:13.298312 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-22 12:06:13.298318 | orchestrator | Sunday 22 June 2025 11:58:07 +0000 (0:00:00.996) 0:03:51.724 ***********
2025-06-22 12:06:13.298325 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298332 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298338 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298345 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298351 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298358 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298364 | orchestrator |
2025-06-22 12:06:13.298371 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-22 12:06:13.298377 | orchestrator | Sunday 22 June 2025 11:58:07 +0000 (0:00:00.664) 0:03:52.389 ***********
2025-06-22 12:06:13.298384 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298391 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298397 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298404 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298410 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298417 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298423 | orchestrator |
2025-06-22 12:06:13.298430 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-22 12:06:13.298437 | orchestrator | Sunday 22 June 2025 11:58:08 +0000 (0:00:00.807) 0:03:53.196 ***********
2025-06-22 12:06:13.298443 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298450 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298457 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298463 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298470 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298476 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298483 | orchestrator |
2025-06-22 12:06:13.298490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-22 12:06:13.298496 | orchestrator | Sunday 22 June 2025 11:58:09 +0000 (0:00:00.819) 0:03:54.015 ***********
2025-06-22 12:06:13.298503 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298509 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298516 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298522 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298529 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298535 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298542 | orchestrator |
2025-06-22 12:06:13.298549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-22 12:06:13.298555 | orchestrator | Sunday 22 June 2025 11:58:10 +0000 (0:00:00.652) 0:03:54.668 ***********
2025-06-22 12:06:13.298562 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298575 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.298582 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.298589 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298595 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298606 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298612 | orchestrator |
2025-06-22 12:06:13.298619 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-22 12:06:13.298625 | orchestrator | Sunday 22 June 2025 11:58:10 +0000 (0:00:00.538) 0:03:55.206 ***********
2025-06-22 12:06:13.298632 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.298639 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.298645 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.298652 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298658 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298665 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298671 | orchestrator |
2025-06-22 12:06:13.298678 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-22 12:06:13.298684 | orchestrator | Sunday 22 June 2025 11:58:11 +0000 (0:00:00.803) 0:03:56.010 ***********
2025-06-22 12:06:13.298691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.298698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.298704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.298711 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298717 | orchestrator |
2025-06-22 12:06:13.298724 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-22 12:06:13.298730 | orchestrator | Sunday 22 June 2025 11:58:11 +0000 (0:00:00.340) 0:03:56.351 ***********
2025-06-22 12:06:13.298737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.298743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.298750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.298756 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298763 | orchestrator |
2025-06-22 12:06:13.298769 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-22 12:06:13.298776 | orchestrator | Sunday 22 June 2025 11:58:12 +0000 (0:00:00.366) 0:03:56.717 ***********
2025-06-22 12:06:13.298783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.298789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.298796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.298802 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.298809 | orchestrator |
2025-06-22 12:06:13.298815 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-22 12:06:13.298822 | orchestrator | Sunday 22 June 2025 11:58:12 +0000 (0:00:00.353) 0:03:57.071 ***********
2025-06-22 12:06:13.298829 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.298835 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.298842 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.298848 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298855 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298861 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298868 | orchestrator |
2025-06-22 12:06:13.298874 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-22 12:06:13.298881 | orchestrator | Sunday 22 June 2025 11:58:13 +0000 (0:00:00.680) 0:03:57.751 ***********
2025-06-22 12:06:13.298887 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-22 12:06:13.298894 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-22 12:06:13.298911 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-22 12:06:13.298918 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.298925 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-22 12:06:13.298931 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.298938 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-22 12:06:13.298944 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-22 12:06:13.298951 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.298957 | orchestrator |
2025-06-22 12:06:13.298964 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-22 12:06:13.298975 | orchestrator | Sunday 22 June 2025 11:58:15 +0000 (0:00:01.817) 0:03:59.569 ***********
2025-06-22 12:06:13.298982 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:06:13.298988 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:06:13.298995 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:06:13.299001 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.299008 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.299014 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.299021 | orchestrator |
2025-06-22 12:06:13.299027 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-22 12:06:13.299034 | orchestrator | Sunday 22 June 2025 11:58:18 +0000 (0:00:03.065) 0:04:02.634 ***********
2025-06-22 12:06:13.299041 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:06:13.299047 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:06:13.299054 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:06:13.299060 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.299067 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.299073 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.299080 | orchestrator |
2025-06-22 12:06:13.299086 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-22 12:06:13.299093 | orchestrator | Sunday 22 June 2025 11:58:19 +0000 (0:00:01.107) 0:04:03.742 ***********
2025-06-22 12:06:13.299099 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299106 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.299112 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.299119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.299125 | orchestrator |
2025-06-22 12:06:13.299132 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-22 12:06:13.299138 | orchestrator | Sunday 22 June 2025 11:58:20 +0000 (0:00:01.220) 0:04:04.963 ***********
2025-06-22 12:06:13.299145 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.299151 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.299158 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.299165 | orchestrator |
2025-06-22 12:06:13.299178 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-22 12:06:13.299186 | orchestrator | Sunday 22 June 2025 11:58:20 +0000 (0:00:00.374) 0:04:05.337 ***********
2025-06-22 12:06:13.299192 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.299199 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.299205 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.299212 | orchestrator |
2025-06-22 12:06:13.299218 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-22 12:06:13.299225 | orchestrator | Sunday 22 June 2025 11:58:22 +0000 (0:00:01.794) 0:04:07.131 ***********
2025-06-22 12:06:13.299232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-22 12:06:13.299238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-22 12:06:13.299245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-22 12:06:13.299251 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.299258 | orchestrator |
2025-06-22 12:06:13.299265 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-22 12:06:13.299271 | orchestrator | Sunday 22 June 2025 11:58:23 +0000 (0:00:00.797) 0:04:07.929 ***********
2025-06-22 12:06:13.299278 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.299284 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.299291 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.299297 | orchestrator |
2025-06-22 12:06:13.299304 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-22 12:06:13.299311 | orchestrator | Sunday 22 June 2025 11:58:23 +0000 (0:00:00.459) 0:04:08.388 ***********
2025-06-22 12:06:13.299317 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.299324 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.299334 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.299341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.299348 | orchestrator |
2025-06-22 12:06:13.299354 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-22 12:06:13.299361 | orchestrator | Sunday 22 June 2025 11:58:25 +0000 (0:00:01.023) 0:04:09.412 ***********
2025-06-22 12:06:13.299367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.299374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.299380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.299387 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299393 | orchestrator |
2025-06-22 12:06:13.299400 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-22 12:06:13.299407 | orchestrator | Sunday 22 June 2025 11:58:25 +0000 (0:00:00.325) 0:04:09.737 ***********
2025-06-22 12:06:13.299413 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299420 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.299426 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.299433 | orchestrator |
2025-06-22 12:06:13.299439 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-22 12:06:13.299446 | orchestrator | Sunday 22 June 2025 11:58:25 +0000 (0:00:00.320) 0:04:10.058 ***********
2025-06-22 12:06:13.299452 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299459 | orchestrator |
2025-06-22 12:06:13.299466 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-22 12:06:13.299472 | orchestrator | Sunday 22 June 2025 11:58:25 +0000 (0:00:00.199) 0:04:10.258 ***********
2025-06-22 12:06:13.299479 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299485 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.299492 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.299498 | orchestrator |
2025-06-22 12:06:13.299505 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-22 12:06:13.299511 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.342) 0:04:10.600 ***********
2025-06-22 12:06:13.299518 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299524 | orchestrator |
2025-06-22 12:06:13.299531 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-22 12:06:13.299537 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.202) 0:04:10.802 ***********
2025-06-22 12:06:13.299544 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299550 | orchestrator |
2025-06-22 12:06:13.299557 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-22 12:06:13.299564 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.230) 0:04:11.033 ***********
2025-06-22 12:06:13.299570 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299577 | orchestrator |
2025-06-22 12:06:13.299583 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-22 12:06:13.299590 | orchestrator | Sunday 22 June 2025 11:58:26 +0000 (0:00:00.286) 0:04:11.319 ***********
2025-06-22 12:06:13.299597 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299603 | orchestrator |
2025-06-22 12:06:13.299610 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-22 12:06:13.299617 | orchestrator | Sunday 22 June 2025 11:58:27 +0000 (0:00:00.199) 0:04:11.519 ***********
2025-06-22 12:06:13.299623 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299630 | orchestrator |
2025-06-22 12:06:13.299636 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-22 12:06:13.299643 | orchestrator | Sunday 22 June 2025 11:58:27 +0000 (0:00:00.203) 0:04:11.722 ***********
2025-06-22 12:06:13.299649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.299656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.299662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.299673 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299679 | orchestrator |
2025-06-22 12:06:13.299686 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-22 12:06:13.299693 | orchestrator | Sunday 22 June 2025 11:58:27 +0000 (0:00:00.419) 0:04:12.142 ***********
2025-06-22 12:06:13.299699 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299709 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.299718 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.299725 | orchestrator |
2025-06-22 12:06:13.299732 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-22 12:06:13.299738 | orchestrator | Sunday 22 June 2025 11:58:28 +0000 (0:00:00.341) 0:04:12.484 ***********
2025-06-22 12:06:13.299745 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299751 | orchestrator |
2025-06-22 12:06:13.299758 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-22 12:06:13.299764 | orchestrator | Sunday 22 June 2025 11:58:28 +0000 (0:00:00.189) 0:04:12.674 ***********
2025-06-22 12:06:13.299771 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299777 | orchestrator |
2025-06-22 12:06:13.299784 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-22 12:06:13.299790 | orchestrator | Sunday 22 June 2025 11:58:28 +0000 (0:00:00.183) 0:04:12.858 ***********
2025-06-22 12:06:13.299797 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.299803 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.299810 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.299816 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.299823 | orchestrator |
2025-06-22 12:06:13.299830 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-22 12:06:13.299836 | orchestrator | Sunday 22 June 2025 11:58:29 +0000 (0:00:01.043) 0:04:13.902 ***********
2025-06-22 12:06:13.299842 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.299849 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.299856 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.299862 | orchestrator |
2025-06-22 12:06:13.299869 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-22 12:06:13.299875 | orchestrator | Sunday 22 June 2025 11:58:29 +0000 (0:00:00.301) 0:04:14.203 ***********
2025-06-22 12:06:13.299882 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:06:13.299888 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:06:13.299895 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:06:13.299926 | orchestrator |
2025-06-22 12:06:13.299934 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-22 12:06:13.299941 | orchestrator | Sunday 22 June 2025 11:58:30 +0000 (0:00:01.153) 0:04:15.357 ***********
2025-06-22 12:06:13.299947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.299954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.299961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.299967 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.299974 | orchestrator |
2025-06-22 12:06:13.299980 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-22 12:06:13.299987 | orchestrator | Sunday 22 June 2025 11:58:31 +0000 (0:00:00.913) 0:04:16.270 ***********
2025-06-22 12:06:13.299993 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.300000 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.300006 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.300013 | orchestrator |
2025-06-22 12:06:13.300020 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-22 12:06:13.300026 | orchestrator | Sunday 22 June 2025 11:58:32 +0000 (0:00:00.309) 0:04:16.579 ***********
2025-06-22 12:06:13.300033 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300039 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300050 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300057 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.300063 | orchestrator |
2025-06-22 12:06:13.300070 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-22 12:06:13.300077 | orchestrator | Sunday 22 June 2025 11:58:33 +0000 (0:00:01.276) 0:04:17.856 ***********
2025-06-22 12:06:13.300083 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.300090 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.300096 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.300103 | orchestrator |
2025-06-22 12:06:13.300109 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-22 12:06:13.300116 | orchestrator | Sunday 22 June 2025 11:58:33 +0000 (0:00:00.391) 0:04:18.247 ***********
2025-06-22 12:06:13.300122 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:06:13.300129 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:06:13.300136 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:06:13.300142 | orchestrator |
2025-06-22 12:06:13.300149 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-22 12:06:13.300155 | orchestrator | Sunday 22 June 2025 11:58:35 +0000 (0:00:01.509) 0:04:19.756 ***********
2025-06-22 12:06:13.300162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-22 12:06:13.300168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-22 12:06:13.300175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-22 12:06:13.300182 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.300188 | orchestrator |
2025-06-22 12:06:13.300195 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-22 12:06:13.300201 | orchestrator | Sunday 22 June 2025 11:58:36 +0000 (0:00:00.882) 0:04:20.638 ***********
2025-06-22 12:06:13.300208 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.300214 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.300221 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.300228 | orchestrator |
2025-06-22 12:06:13.300234 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-06-22 12:06:13.300241 | orchestrator | Sunday 22 June 2025 11:58:36 +0000 (0:00:00.297) 0:04:20.936 ***********
2025-06-22 12:06:13.300247 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.300254 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.300260 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.300267 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300273 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300280 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300286 | orchestrator |
2025-06-22 12:06:13.300293 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-22 12:06:13.300307 | orchestrator | Sunday 22 June 2025 11:58:37 +0000 (0:00:00.799) 0:04:21.735 ***********
2025-06-22 12:06:13.300314 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.300321 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.300327 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.300334 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.300341 | orchestrator |
2025-06-22 12:06:13.300347 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-22 12:06:13.300354 | orchestrator | Sunday 22 June 2025 11:58:38 +0000 (0:00:01.089) 0:04:22.825 ***********
2025-06-22 12:06:13.300361 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.300368 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.300374 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.300381 | orchestrator |
2025-06-22 12:06:13.300388 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-22 12:06:13.300395 | orchestrator | Sunday 22 June 2025 11:58:38 +0000 (0:00:00.274) 0:04:23.100 ***********
2025-06-22 12:06:13.300405 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.300412 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.300418 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.300425 | orchestrator |
2025-06-22 12:06:13.300432 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-22 12:06:13.300438 | orchestrator | Sunday 22 June 2025 11:58:39 +0000 (0:00:01.242) 0:04:24.342 ***********
2025-06-22 12:06:13.300445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-22 12:06:13.300452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-22 12:06:13.300459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-22 12:06:13.300465 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300472 | orchestrator |
2025-06-22 12:06:13.300478 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-22 12:06:13.300484 | orchestrator | Sunday 22 June 2025 11:58:40 +0000 (0:00:00.792) 0:04:25.135 ***********
2025-06-22 12:06:13.300490 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.300496 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.300502 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.300509 | orchestrator |
2025-06-22 12:06:13.300515 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-22 12:06:13.300521 | orchestrator |
2025-06-22 12:06:13.300527 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-22 12:06:13.300533 | orchestrator | Sunday 22 June 2025 11:58:41 +0000 (0:00:00.756) 0:04:25.891 ***********
2025-06-22 12:06:13.300540 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.300546 | orchestrator |
2025-06-22 12:06:13.300552 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-22 12:06:13.300558 | orchestrator | Sunday 22 June 2025 11:58:41 +0000 (0:00:00.404) 0:04:26.295 ***********
2025-06-22 12:06:13.300564 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.300571 | orchestrator |
2025-06-22 12:06:13.300577 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-22 12:06:13.300583 | orchestrator | Sunday 22 June 2025 11:58:42 +0000 (0:00:00.628) 0:04:26.924 ***********
2025-06-22 12:06:13.300589 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.300595 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.300602 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.300608 | orchestrator |
2025-06-22 12:06:13.300614 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-22 12:06:13.300620 | orchestrator | Sunday 22 June 2025 11:58:43 +0000 (0:00:00.708) 0:04:27.632 ***********
2025-06-22 12:06:13.300626 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300632 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300638 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300645 | orchestrator |
2025-06-22 12:06:13.300651 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-22 12:06:13.300657 | orchestrator | Sunday 22 June 2025 11:58:43 +0000 (0:00:00.298) 0:04:27.930 ***********
2025-06-22 12:06:13.300663 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300669 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300676 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300682 | orchestrator |
2025-06-22 12:06:13.300688 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-22 12:06:13.300694 | orchestrator | Sunday 22 June 2025 11:58:43 +0000 (0:00:00.271) 0:04:28.202 ***********
2025-06-22 12:06:13.300700 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300706 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300712 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300719 | orchestrator |
2025-06-22 12:06:13.300725 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-22 12:06:13.300735 | orchestrator | Sunday 22 June 2025 11:58:44 +0000 (0:00:00.533) 0:04:28.736 ***********
2025-06-22 12:06:13.300741 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.300747 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.300753 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.300760 | orchestrator |
2025-06-22 12:06:13.300766 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-22 12:06:13.300772 | orchestrator | Sunday 22 June 2025 11:58:45 +0000 (0:00:00.708) 0:04:29.444 ***********
2025-06-22 12:06:13.300778 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300784 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300791 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300797 | orchestrator |
2025-06-22 12:06:13.300803 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-22 12:06:13.300809 | orchestrator | Sunday 22 June 2025 11:58:45 +0000 (0:00:00.322) 0:04:29.767 ***********
2025-06-22 12:06:13.300815 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300821 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300828 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300834 | orchestrator |
2025-06-22 12:06:13.300845 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-22 12:06:13.300852 | orchestrator | Sunday 22 June 2025 11:58:45 +0000 (0:00:00.358) 0:04:30.125 ***********
2025-06-22 12:06:13.300858 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.300865 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.300871 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.300877 | orchestrator |
2025-06-22 12:06:13.300883 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-22 12:06:13.300889 | orchestrator | Sunday 22 June 2025 11:58:46 +0000 (0:00:01.074) 0:04:31.200 ***********
2025-06-22 12:06:13.300896 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.300912 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.300918 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.300925 | orchestrator |
2025-06-22 12:06:13.300932 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-22 12:06:13.300938 | orchestrator | Sunday 22 June 2025 11:58:47 +0000 (0:00:00.817) 0:04:32.017 ***********
2025-06-22 12:06:13.300944 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.300950 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.300957 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.300963 | orchestrator |
2025-06-22 12:06:13.300969 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-22 12:06:13.300975 | orchestrator | Sunday
22 June 2025 11:58:47 +0000 (0:00:00.271) 0:04:32.289 *********** 2025-06-22 12:06:13.300981 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.300988 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.300994 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301000 | orchestrator | 2025-06-22 12:06:13.301006 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 12:06:13.301013 | orchestrator | Sunday 22 June 2025 11:58:48 +0000 (0:00:00.314) 0:04:32.603 *********** 2025-06-22 12:06:13.301019 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.301025 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.301031 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.301037 | orchestrator | 2025-06-22 12:06:13.301043 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 12:06:13.301049 | orchestrator | Sunday 22 June 2025 11:58:48 +0000 (0:00:00.464) 0:04:33.068 *********** 2025-06-22 12:06:13.301056 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.301062 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.301068 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.301074 | orchestrator | 2025-06-22 12:06:13.301081 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 12:06:13.301087 | orchestrator | Sunday 22 June 2025 11:58:48 +0000 (0:00:00.318) 0:04:33.387 *********** 2025-06-22 12:06:13.301096 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.301103 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.301109 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.301115 | orchestrator | 2025-06-22 12:06:13.301121 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 12:06:13.301128 | orchestrator | Sunday 22 June 2025 11:58:49 
+0000 (0:00:00.266) 0:04:33.653 *********** 2025-06-22 12:06:13.301134 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.301140 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.301146 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.301152 | orchestrator | 2025-06-22 12:06:13.301158 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 12:06:13.301165 | orchestrator | Sunday 22 June 2025 11:58:49 +0000 (0:00:00.246) 0:04:33.899 *********** 2025-06-22 12:06:13.301171 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.301177 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.301183 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.301189 | orchestrator | 2025-06-22 12:06:13.301195 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 12:06:13.301202 | orchestrator | Sunday 22 June 2025 11:58:49 +0000 (0:00:00.390) 0:04:34.290 *********** 2025-06-22 12:06:13.301208 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301214 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301220 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301226 | orchestrator | 2025-06-22 12:06:13.301232 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 12:06:13.301239 | orchestrator | Sunday 22 June 2025 11:58:50 +0000 (0:00:00.237) 0:04:34.528 *********** 2025-06-22 12:06:13.301245 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301251 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301257 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301263 | orchestrator | 2025-06-22 12:06:13.301269 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 12:06:13.301276 | orchestrator | Sunday 22 June 2025 11:58:50 +0000 (0:00:00.285) 0:04:34.813 
*********** 2025-06-22 12:06:13.301282 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301288 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301294 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301300 | orchestrator | 2025-06-22 12:06:13.301306 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-22 12:06:13.301312 | orchestrator | Sunday 22 June 2025 11:58:51 +0000 (0:00:00.640) 0:04:35.454 *********** 2025-06-22 12:06:13.301318 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301325 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301331 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301337 | orchestrator | 2025-06-22 12:06:13.301343 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-22 12:06:13.301349 | orchestrator | Sunday 22 June 2025 11:58:51 +0000 (0:00:00.262) 0:04:35.717 *********** 2025-06-22 12:06:13.301355 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-06-22 12:06:13.301361 | orchestrator | 2025-06-22 12:06:13.301367 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-22 12:06:13.301373 | orchestrator | Sunday 22 June 2025 11:58:51 +0000 (0:00:00.494) 0:04:36.212 *********** 2025-06-22 12:06:13.301379 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.301386 | orchestrator | 2025-06-22 12:06:13.301392 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-22 12:06:13.301401 | orchestrator | Sunday 22 June 2025 11:58:51 +0000 (0:00:00.106) 0:04:36.318 *********** 2025-06-22 12:06:13.301411 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 12:06:13.301417 | orchestrator | 2025-06-22 12:06:13.301423 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] 
**************************** 2025-06-22 12:06:13.301429 | orchestrator | Sunday 22 June 2025 11:58:53 +0000 (0:00:01.247) 0:04:37.565 *********** 2025-06-22 12:06:13.301441 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301447 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301454 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301460 | orchestrator | 2025-06-22 12:06:13.301466 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-22 12:06:13.301472 | orchestrator | Sunday 22 June 2025 11:58:53 +0000 (0:00:00.446) 0:04:38.012 *********** 2025-06-22 12:06:13.301478 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301485 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301491 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301497 | orchestrator | 2025-06-22 12:06:13.301503 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-22 12:06:13.301509 | orchestrator | Sunday 22 June 2025 11:58:54 +0000 (0:00:00.474) 0:04:38.487 *********** 2025-06-22 12:06:13.301516 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301522 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.301528 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.301534 | orchestrator | 2025-06-22 12:06:13.301540 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-22 12:06:13.301546 | orchestrator | Sunday 22 June 2025 11:58:55 +0000 (0:00:01.488) 0:04:39.976 *********** 2025-06-22 12:06:13.301553 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301559 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.301565 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.301571 | orchestrator | 2025-06-22 12:06:13.301577 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-22 
12:06:13.301584 | orchestrator | Sunday 22 June 2025 11:58:56 +0000 (0:00:01.143) 0:04:41.119 *********** 2025-06-22 12:06:13.301590 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301596 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.301602 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.301608 | orchestrator | 2025-06-22 12:06:13.301614 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-22 12:06:13.301621 | orchestrator | Sunday 22 June 2025 11:58:57 +0000 (0:00:00.772) 0:04:41.892 *********** 2025-06-22 12:06:13.301627 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301633 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301639 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301646 | orchestrator | 2025-06-22 12:06:13.301652 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-22 12:06:13.301658 | orchestrator | Sunday 22 June 2025 11:58:58 +0000 (0:00:00.795) 0:04:42.687 *********** 2025-06-22 12:06:13.301664 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301670 | orchestrator | 2025-06-22 12:06:13.301676 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-22 12:06:13.301683 | orchestrator | Sunday 22 June 2025 11:58:59 +0000 (0:00:01.239) 0:04:43.927 *********** 2025-06-22 12:06:13.301689 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301695 | orchestrator | 2025-06-22 12:06:13.301701 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-22 12:06:13.301707 | orchestrator | Sunday 22 June 2025 11:59:00 +0000 (0:00:00.684) 0:04:44.612 *********** 2025-06-22 12:06:13.301713 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 12:06:13.301720 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 
12:06:13.301726 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.301732 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:06:13.301738 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-22 12:06:13.301744 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:06:13.301751 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:06:13.301757 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-22 12:06:13.301766 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:06:13.301773 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-22 12:06:13.301779 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-22 12:06:13.301785 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-22 12:06:13.301791 | orchestrator | 2025-06-22 12:06:13.301797 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-22 12:06:13.301804 | orchestrator | Sunday 22 June 2025 11:59:03 +0000 (0:00:03.340) 0:04:47.952 *********** 2025-06-22 12:06:13.301810 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301816 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.301822 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.301828 | orchestrator | 2025-06-22 12:06:13.301834 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-22 12:06:13.301841 | orchestrator | Sunday 22 June 2025 11:59:05 +0000 (0:00:01.626) 0:04:49.579 *********** 2025-06-22 12:06:13.301847 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301853 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301859 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301865 | orchestrator | 2025-06-22 
12:06:13.301872 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-22 12:06:13.301878 | orchestrator | Sunday 22 June 2025 11:59:05 +0000 (0:00:00.419) 0:04:49.998 *********** 2025-06-22 12:06:13.301884 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.301890 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.301896 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.301913 | orchestrator | 2025-06-22 12:06:13.301920 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-22 12:06:13.301926 | orchestrator | Sunday 22 June 2025 11:59:05 +0000 (0:00:00.346) 0:04:50.345 *********** 2025-06-22 12:06:13.301932 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301938 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.301944 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.301951 | orchestrator | 2025-06-22 12:06:13.301963 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-22 12:06:13.301969 | orchestrator | Sunday 22 June 2025 11:59:07 +0000 (0:00:01.984) 0:04:52.329 *********** 2025-06-22 12:06:13.301976 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.301982 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.301988 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.301994 | orchestrator | 2025-06-22 12:06:13.302000 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-22 12:06:13.302006 | orchestrator | Sunday 22 June 2025 11:59:09 +0000 (0:00:01.576) 0:04:53.906 *********** 2025-06-22 12:06:13.302013 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302083 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302090 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302096 | orchestrator | 2025-06-22 12:06:13.302103 | 
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-22 12:06:13.302109 | orchestrator | Sunday 22 June 2025 11:59:09 +0000 (0:00:00.346) 0:04:54.253 *********** 2025-06-22 12:06:13.302115 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.302121 | orchestrator | 2025-06-22 12:06:13.302128 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-22 12:06:13.302134 | orchestrator | Sunday 22 June 2025 11:59:10 +0000 (0:00:00.554) 0:04:54.807 *********** 2025-06-22 12:06:13.302140 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302146 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302152 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302158 | orchestrator | 2025-06-22 12:06:13.302165 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-22 12:06:13.302171 | orchestrator | Sunday 22 June 2025 11:59:10 +0000 (0:00:00.522) 0:04:55.330 *********** 2025-06-22 12:06:13.302181 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302188 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302194 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302200 | orchestrator | 2025-06-22 12:06:13.302206 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-22 12:06:13.302212 | orchestrator | Sunday 22 June 2025 11:59:11 +0000 (0:00:00.320) 0:04:55.651 *********** 2025-06-22 12:06:13.302219 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.302225 | orchestrator | 2025-06-22 12:06:13.302231 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-22 12:06:13.302237 | 
orchestrator | Sunday 22 June 2025 11:59:11 +0000 (0:00:00.523) 0:04:56.175 *********** 2025-06-22 12:06:13.302243 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.302249 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.302256 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.302262 | orchestrator | 2025-06-22 12:06:13.302268 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-22 12:06:13.302274 | orchestrator | Sunday 22 June 2025 11:59:13 +0000 (0:00:01.936) 0:04:58.112 *********** 2025-06-22 12:06:13.302280 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.302286 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.302293 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.302299 | orchestrator | 2025-06-22 12:06:13.302305 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-22 12:06:13.302311 | orchestrator | Sunday 22 June 2025 11:59:14 +0000 (0:00:01.241) 0:04:59.354 *********** 2025-06-22 12:06:13.302317 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.302323 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.302329 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.302336 | orchestrator | 2025-06-22 12:06:13.302342 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-22 12:06:13.302348 | orchestrator | Sunday 22 June 2025 11:59:16 +0000 (0:00:01.861) 0:05:01.215 *********** 2025-06-22 12:06:13.302354 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.302360 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.302366 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.302373 | orchestrator | 2025-06-22 12:06:13.302379 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-22 12:06:13.302385 | orchestrator | 
Sunday 22 June 2025 11:59:19 +0000 (0:00:02.588) 0:05:03.803 *********** 2025-06-22 12:06:13.302391 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.302397 | orchestrator | 2025-06-22 12:06:13.302403 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-22 12:06:13.302409 | orchestrator | Sunday 22 June 2025 11:59:20 +0000 (0:00:00.904) 0:05:04.707 *********** 2025-06-22 12:06:13.302415 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-22 12:06:13.302421 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.302427 | orchestrator | 2025-06-22 12:06:13.302433 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-22 12:06:13.302439 | orchestrator | Sunday 22 June 2025 11:59:42 +0000 (0:00:21.905) 0:05:26.613 *********** 2025-06-22 12:06:13.302445 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.302452 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.302458 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.302464 | orchestrator | 2025-06-22 12:06:13.302470 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-22 12:06:13.302476 | orchestrator | Sunday 22 June 2025 11:59:53 +0000 (0:00:10.956) 0:05:37.570 *********** 2025-06-22 12:06:13.302482 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302488 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302494 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302505 | orchestrator | 2025-06-22 12:06:13.302512 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-22 12:06:13.302518 | orchestrator | Sunday 22 June 2025 11:59:53 +0000 (0:00:00.430) 0:05:38.001 *********** 2025-06-22 
12:06:13.302548 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-22 12:06:13.302557 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-22 12:06:13.302564 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-22 12:06:13.302571 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-22 12:06:13.302578 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-22 12:06:13.302584 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c5efada8e25e8341f140a023811868aa71ee6aed'}])  2025-06-22 12:06:13.302591 | orchestrator | 2025-06-22 12:06:13.302597 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 12:06:13.302603 | orchestrator | Sunday 22 June 2025 12:00:08 +0000 (0:00:15.198) 0:05:53.199 *********** 2025-06-22 12:06:13.302609 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302615 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302621 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302627 | orchestrator | 2025-06-22 12:06:13.302633 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 12:06:13.302639 | orchestrator | Sunday 22 June 2025 12:00:09 +0000 (0:00:00.342) 0:05:53.542 *********** 2025-06-22 12:06:13.302645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.302651 | orchestrator | 2025-06-22 12:06:13.302658 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 12:06:13.302664 | orchestrator | Sunday 22 June 2025 12:00:09 +0000 (0:00:00.606) 0:05:54.148 *********** 2025-06-22 12:06:13.302670 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.302676 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.302682 | orchestrator | ok: 
[testbed-node-2] 2025-06-22 12:06:13.302692 | orchestrator | 2025-06-22 12:06:13.302698 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 12:06:13.302704 | orchestrator | Sunday 22 June 2025 12:00:10 +0000 (0:00:00.284) 0:05:54.433 *********** 2025-06-22 12:06:13.302710 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302716 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302722 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302728 | orchestrator | 2025-06-22 12:06:13.302734 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 12:06:13.302740 | orchestrator | Sunday 22 June 2025 12:00:10 +0000 (0:00:00.291) 0:05:54.725 *********** 2025-06-22 12:06:13.302746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 12:06:13.302752 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 12:06:13.302758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 12:06:13.302764 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302771 | orchestrator | 2025-06-22 12:06:13.302777 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-22 12:06:13.302783 | orchestrator | Sunday 22 June 2025 12:00:11 +0000 (0:00:01.002) 0:05:55.728 *********** 2025-06-22 12:06:13.302789 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.302795 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.302801 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.302807 | orchestrator | 2025-06-22 12:06:13.302831 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-22 12:06:13.302839 | orchestrator | 2025-06-22 12:06:13.302848 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 
12:06:13.302854 | orchestrator | Sunday 22 June 2025 12:00:12 +0000 (0:00:00.892) 0:05:56.621 *********** 2025-06-22 12:06:13.302860 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.302866 | orchestrator | 2025-06-22 12:06:13.302873 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 12:06:13.302879 | orchestrator | Sunday 22 June 2025 12:00:12 +0000 (0:00:00.522) 0:05:57.143 *********** 2025-06-22 12:06:13.302885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.302891 | orchestrator | 2025-06-22 12:06:13.302897 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 12:06:13.302912 | orchestrator | Sunday 22 June 2025 12:00:13 +0000 (0:00:00.857) 0:05:58.000 *********** 2025-06-22 12:06:13.302919 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.302925 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.302931 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.302937 | orchestrator | 2025-06-22 12:06:13.302943 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 12:06:13.302949 | orchestrator | Sunday 22 June 2025 12:00:14 +0000 (0:00:00.769) 0:05:58.770 *********** 2025-06-22 12:06:13.302955 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.302961 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.302967 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.302973 | orchestrator | 2025-06-22 12:06:13.302979 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 12:06:13.302986 | orchestrator | Sunday 22 June 2025 12:00:14 +0000 (0:00:00.384) 0:05:59.154 *********** 
2025-06-22 12:06:13.302992 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.302998 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303004 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303010 | orchestrator |
2025-06-22 12:06:13.303016 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-22 12:06:13.303022 | orchestrator | Sunday 22 June 2025 12:00:15 +0000 (0:00:00.687) 0:05:59.842 ***********
2025-06-22 12:06:13.303028 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303038 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303044 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303050 | orchestrator |
2025-06-22 12:06:13.303057 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-22 12:06:13.303063 | orchestrator | Sunday 22 June 2025 12:00:15 +0000 (0:00:00.354) 0:06:00.196 ***********
2025-06-22 12:06:13.303069 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303075 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303081 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303087 | orchestrator |
2025-06-22 12:06:13.303093 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-22 12:06:13.303099 | orchestrator | Sunday 22 June 2025 12:00:16 +0000 (0:00:00.836) 0:06:01.033 ***********
2025-06-22 12:06:13.303105 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303111 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303117 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303123 | orchestrator |
2025-06-22 12:06:13.303129 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-22 12:06:13.303135 | orchestrator | Sunday 22 June 2025 12:00:16 +0000 (0:00:00.338) 0:06:01.372 ***********
2025-06-22 12:06:13.303141 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303147 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303153 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303159 | orchestrator |
2025-06-22 12:06:13.303165 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-22 12:06:13.303172 | orchestrator | Sunday 22 June 2025 12:00:17 +0000 (0:00:00.611) 0:06:01.983 ***********
2025-06-22 12:06:13.303178 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303184 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303190 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303196 | orchestrator |
2025-06-22 12:06:13.303202 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-22 12:06:13.303208 | orchestrator | Sunday 22 June 2025 12:00:18 +0000 (0:00:00.734) 0:06:02.718 ***********
2025-06-22 12:06:13.303214 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303220 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303226 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303232 | orchestrator |
2025-06-22 12:06:13.303238 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-22 12:06:13.303244 | orchestrator | Sunday 22 June 2025 12:00:19 +0000 (0:00:00.782) 0:06:03.501 ***********
2025-06-22 12:06:13.303250 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303257 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303263 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303269 | orchestrator |
2025-06-22 12:06:13.303275 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-22 12:06:13.303281 | orchestrator | Sunday 22 June 2025 12:00:19 +0000 (0:00:00.314) 0:06:03.816 ***********
2025-06-22 12:06:13.303287 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303293 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303299 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303305 | orchestrator |
2025-06-22 12:06:13.303311 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-22 12:06:13.303317 | orchestrator | Sunday 22 June 2025 12:00:20 +0000 (0:00:00.593) 0:06:04.409 ***********
2025-06-22 12:06:13.303323 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303330 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303336 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303342 | orchestrator |
2025-06-22 12:06:13.303348 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-22 12:06:13.303354 | orchestrator | Sunday 22 June 2025 12:00:20 +0000 (0:00:00.321) 0:06:04.730 ***********
2025-06-22 12:06:13.303360 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303366 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303390 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303401 | orchestrator |
2025-06-22 12:06:13.303410 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-22 12:06:13.303417 | orchestrator | Sunday 22 June 2025 12:00:20 +0000 (0:00:00.324) 0:06:05.055 ***********
2025-06-22 12:06:13.303423 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303429 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303435 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303441 | orchestrator |
2025-06-22 12:06:13.303447 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-22 12:06:13.303453 | orchestrator | Sunday 22 June 2025 12:00:20 +0000 (0:00:00.308) 0:06:05.363 ***********
2025-06-22 12:06:13.303459 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303465 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303471 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303477 | orchestrator |
2025-06-22 12:06:13.303483 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-22 12:06:13.303489 | orchestrator | Sunday 22 June 2025 12:00:21 +0000 (0:00:00.562) 0:06:05.926 ***********
2025-06-22 12:06:13.303495 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303502 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303508 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303514 | orchestrator |
2025-06-22 12:06:13.303519 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-22 12:06:13.303526 | orchestrator | Sunday 22 June 2025 12:00:21 +0000 (0:00:00.302) 0:06:06.228 ***********
2025-06-22 12:06:13.303532 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303538 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303544 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303550 | orchestrator |
2025-06-22 12:06:13.303556 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-22 12:06:13.303562 | orchestrator | Sunday 22 June 2025 12:00:22 +0000 (0:00:00.342) 0:06:06.571 ***********
2025-06-22 12:06:13.303568 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303574 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303580 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303586 | orchestrator |
2025-06-22 12:06:13.303592 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-22 12:06:13.303598 | orchestrator | Sunday 22 June 2025 12:00:22 +0000 (0:00:00.346) 0:06:06.918 ***********
2025-06-22 12:06:13.303604 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303611 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303616 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303622 | orchestrator |
2025-06-22 12:06:13.303629 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-06-22 12:06:13.303635 | orchestrator | Sunday 22 June 2025 12:00:23 +0000 (0:00:00.833) 0:06:07.751 ***********
2025-06-22 12:06:13.303641 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-22 12:06:13.303647 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-22 12:06:13.303653 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-22 12:06:13.303659 | orchestrator |
2025-06-22 12:06:13.303665 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-06-22 12:06:13.303671 | orchestrator | Sunday 22 June 2025 12:00:24 +0000 (0:00:00.695) 0:06:08.447 ***********
2025-06-22 12:06:13.303677 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.303683 | orchestrator |
2025-06-22 12:06:13.303690 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-06-22 12:06:13.303696 | orchestrator | Sunday 22 June 2025 12:00:24 +0000 (0:00:00.524) 0:06:08.972 ***********
2025-06-22 12:06:13.303702 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.303708 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.303714 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.303723 | orchestrator |
2025-06-22 12:06:13.303729 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-06-22 12:06:13.303735 | orchestrator | Sunday 22 June 2025 12:00:25 +0000 (0:00:01.089) 0:06:10.061 ***********
2025-06-22 12:06:13.303741 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.303747 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.303753 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.303759 | orchestrator |
2025-06-22 12:06:13.303765 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-06-22 12:06:13.303771 | orchestrator | Sunday 22 June 2025 12:00:26 +0000 (0:00:00.375) 0:06:10.437 ***********
2025-06-22 12:06:13.303778 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303784 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303790 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303795 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-06-22 12:06:13.303801 | orchestrator |
2025-06-22 12:06:13.303808 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-06-22 12:06:13.303814 | orchestrator | Sunday 22 June 2025 12:00:36 +0000 (0:00:10.238) 0:06:20.676 ***********
2025-06-22 12:06:13.303820 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.303826 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.303832 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.303838 | orchestrator |
2025-06-22 12:06:13.303844 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-06-22 12:06:13.303850 | orchestrator | Sunday 22 June 2025 12:00:36 +0000 (0:00:00.405) 0:06:21.081 ***********
2025-06-22 12:06:13.303856 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303862 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-22 12:06:13.303868 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-22 12:06:13.303874 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303880 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-22 12:06:13.303886 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-22 12:06:13.303892 | orchestrator |
2025-06-22 12:06:13.303944 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-06-22 12:06:13.303956 | orchestrator | Sunday 22 June 2025 12:00:39 +0000 (0:00:02.583) 0:06:23.665 ***********
2025-06-22 12:06:13.303962 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303969 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-22 12:06:13.303975 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-22 12:06:13.303981 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-22 12:06:13.303987 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-22 12:06:13.303993 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-22 12:06:13.303999 | orchestrator |
2025-06-22 12:06:13.304005 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-06-22 12:06:13.304011 | orchestrator | Sunday 22 June 2025 12:00:40 +0000 (0:00:01.593) 0:06:25.258 ***********
2025-06-22 12:06:13.304018 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.304024 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.304030 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.304036 | orchestrator |
2025-06-22 12:06:13.304042 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-06-22 12:06:13.304048 | orchestrator | Sunday 22 June 2025 12:00:41 +0000 (0:00:00.741) 0:06:26.000 ***********
2025-06-22 12:06:13.304054 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.304061 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.304067 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.304073 | orchestrator |
2025-06-22 12:06:13.304079 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-06-22 12:06:13.304085 | orchestrator | Sunday 22 June 2025 12:00:41 +0000 (0:00:00.320) 0:06:26.320 ***********
2025-06-22 12:06:13.304095 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.304102 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.304108 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.304114 | orchestrator |
2025-06-22 12:06:13.304120 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-06-22 12:06:13.304126 | orchestrator | Sunday 22 June 2025 12:00:42 +0000 (0:00:00.300) 0:06:26.620 ***********
2025-06-22 12:06:13.304133 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.304139 | orchestrator |
2025-06-22 12:06:13.304145 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-06-22 12:06:13.304151 | orchestrator | Sunday 22 June 2025 12:00:43 +0000 (0:00:00.864) 0:06:27.484 ***********
2025-06-22 12:06:13.304157 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.304163 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.304169 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.304175 | orchestrator |
2025-06-22 12:06:13.304181 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-06-22 12:06:13.304187 | orchestrator | Sunday 22 June 2025 12:00:43 +0000 (0:00:00.374) 0:06:27.859 ***********
2025-06-22 12:06:13.304194 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.304200 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.304206 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:06:13.304212 | orchestrator |
2025-06-22 12:06:13.304218 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-06-22 12:06:13.304224 | orchestrator | Sunday 22 June 2025 12:00:43 +0000 (0:00:00.334) 0:06:28.194 ***********
2025-06-22 12:06:13.304230 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.304237 | orchestrator |
2025-06-22 12:06:13.304243 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-06-22 12:06:13.304249 | orchestrator | Sunday 22 June 2025 12:00:44 +0000 (0:00:00.774) 0:06:28.969 ***********
2025-06-22 12:06:13.304255 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.304261 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.304267 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.304273 | orchestrator |
2025-06-22 12:06:13.304279 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-06-22 12:06:13.304285 | orchestrator | Sunday 22 June 2025 12:00:45 +0000 (0:00:01.352) 0:06:30.321 ***********
2025-06-22 12:06:13.304292 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.304298 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.304304 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.304310 | orchestrator |
2025-06-22 12:06:13.304316 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-06-22 12:06:13.304322 | orchestrator | Sunday 22 June 2025 12:00:47 +0000 (0:00:01.208) 0:06:31.530 ***********
2025-06-22 12:06:13.304328 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.304334 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.304340 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.304346 | orchestrator |
2025-06-22 12:06:13.304353 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-06-22 12:06:13.304359 | orchestrator | Sunday 22 June 2025 12:00:49 +0000 (0:00:02.187) 0:06:33.718 ***********
2025-06-22 12:06:13.304365 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.304371 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.304377 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.304383 | orchestrator |
2025-06-22 12:06:13.304389 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-06-22 12:06:13.304395 | orchestrator | Sunday 22 June 2025 12:00:51 +0000 (0:00:02.045) 0:06:35.763 ***********
2025-06-22 12:06:13.304402 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.304408 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:06:13.304417 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-06-22 12:06:13.304423 | orchestrator |
2025-06-22 12:06:13.304429 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-06-22 12:06:13.304436 | orchestrator | Sunday 22 June 2025 12:00:51 +0000 (0:00:00.420) 0:06:36.184 ***********
2025-06-22 12:06:13.304442 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-06-22 12:06:13.304480 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-06-22 12:06:13.304487 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-06-22 12:06:13.304493 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-06-22 12:06:13.304498 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-22 12:06:13.304504 | orchestrator |
2025-06-22 12:06:13.304509 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-06-22 12:06:13.304521 | orchestrator | Sunday 22 June 2025 12:01:16 +0000 (0:00:24.311) 0:07:00.495 ***********
2025-06-22 12:06:13.304526 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-22 12:06:13.304532 | orchestrator |
2025-06-22 12:06:13.304537 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-06-22 12:06:13.304543 | orchestrator | Sunday 22 June 2025 12:01:17 +0000 (0:00:01.623) 0:07:02.119 ***********
2025-06-22 12:06:13.304548 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.304553 | orchestrator |
2025-06-22 12:06:13.304558 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-06-22 12:06:13.304564 | orchestrator | Sunday 22 June 2025 12:01:18 +0000 (0:00:00.891) 0:07:03.010 ***********
2025-06-22 12:06:13.304569 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.304575 | orchestrator |
2025-06-22 12:06:13.304580 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-06-22 12:06:13.304585 | orchestrator | Sunday 22 June 2025 12:01:18 +0000 (0:00:00.160) 0:07:03.171 ***********
2025-06-22 12:06:13.304591 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-06-22 12:06:13.304596 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-06-22 12:06:13.304601 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-06-22 12:06:13.304606 | orchestrator |
2025-06-22 12:06:13.304612 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-06-22 12:06:13.304617 | orchestrator | Sunday 22 June 2025 12:01:25 +0000 (0:00:06.632) 0:07:09.804 ***********
2025-06-22 12:06:13.304622 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-06-22 12:06:13.304628 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-06-22 12:06:13.304633 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-06-22 12:06:13.304638 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-06-22 12:06:13.304644 | orchestrator |
2025-06-22 12:06:13.304649 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-22 12:06:13.304654 | orchestrator | Sunday 22 June 2025 12:01:30 +0000 (0:00:04.738) 0:07:14.542 ***********
2025-06-22 12:06:13.304660 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.304665 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.304670 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.304676 | orchestrator |
2025-06-22 12:06:13.304681 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-22 12:06:13.304686 | orchestrator | Sunday 22 June 2025 12:01:31 +0000 (0:00:00.998) 0:07:15.541 ***********
2025-06-22 12:06:13.304692 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:06:13.304701 | orchestrator |
2025-06-22 12:06:13.304706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-22 12:06:13.304711 | orchestrator | Sunday 22 June 2025 12:01:31 +0000 (0:00:00.523) 0:07:16.064 ***********
2025-06-22 12:06:13.304717 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.304722 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.304728 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.304733 | orchestrator |
2025-06-22 12:06:13.304738 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-22 12:06:13.304744 | orchestrator | Sunday 22 June 2025 12:01:31 +0000 (0:00:00.316) 0:07:16.381 ***********
2025-06-22 12:06:13.304749 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:06:13.304754 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:06:13.304760 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:06:13.304765 | orchestrator |
2025-06-22 12:06:13.304770 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-22 12:06:13.304776 | orchestrator | Sunday 22 June 2025 12:01:33 +0000 (0:00:01.829) 0:07:18.210 ***********
2025-06-22 12:06:13.304781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-22 12:06:13.304786 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-22 12:06:13.304792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-22 12:06:13.304797 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:06:13.304802 | orchestrator |
2025-06-22 12:06:13.304807 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-22 12:06:13.304813 | orchestrator | Sunday 22 June 2025 12:01:34 +0000 (0:00:00.653) 0:07:18.863 ***********
2025-06-22 12:06:13.304818 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:06:13.304823 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:06:13.304829 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:06:13.304834 | orchestrator |
2025-06-22 12:06:13.304839 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-06-22 12:06:13.304845 | orchestrator |
2025-06-22 12:06:13.304850 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-22 12:06:13.304855 | orchestrator | Sunday 22 June 2025 12:01:35 +0000 (0:00:00.550) 0:07:19.413 ***********
2025-06-22 12:06:13.304861 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.304866 | orchestrator |
2025-06-22 12:06:13.304872 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-22 12:06:13.304896 | orchestrator | Sunday 22 June 2025 12:01:35 +0000 (0:00:00.789) 0:07:20.202 ***********
2025-06-22 12:06:13.304912 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.304918 | orchestrator |
2025-06-22 12:06:13.304923 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-22 12:06:13.304929 | orchestrator | Sunday 22 June 2025 12:01:36 +0000 (0:00:00.526) 0:07:20.728 ***********
2025-06-22 12:06:13.304934 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.304939 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.304945 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.304950 | orchestrator |
2025-06-22 12:06:13.304955 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-22 12:06:13.304961 | orchestrator | Sunday 22 June 2025 12:01:36 +0000 (0:00:00.290) 0:07:21.018 ***********
2025-06-22 12:06:13.304966 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.304972 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.304977 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.304982 | orchestrator |
2025-06-22 12:06:13.304988 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-22 12:06:13.304993 | orchestrator | Sunday 22 June 2025 12:01:37 +0000 (0:00:00.983) 0:07:22.002 ***********
2025-06-22 12:06:13.304998 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305007 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305012 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305018 | orchestrator |
2025-06-22 12:06:13.305023 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-22 12:06:13.305028 | orchestrator | Sunday 22 June 2025 12:01:38 +0000 (0:00:00.685) 0:07:22.688 ***********
2025-06-22 12:06:13.305034 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305039 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305044 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305050 | orchestrator |
2025-06-22 12:06:13.305055 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-22 12:06:13.305061 | orchestrator | Sunday 22 June 2025 12:01:38 +0000 (0:00:00.660) 0:07:23.349 ***********
2025-06-22 12:06:13.305066 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305072 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305077 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305082 | orchestrator |
2025-06-22 12:06:13.305087 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-22 12:06:13.305093 | orchestrator | Sunday 22 June 2025 12:01:39 +0000 (0:00:00.306) 0:07:23.655 ***********
2025-06-22 12:06:13.305098 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305103 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305109 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305114 | orchestrator |
2025-06-22 12:06:13.305119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-22 12:06:13.305125 | orchestrator | Sunday 22 June 2025 12:01:39 +0000 (0:00:00.607) 0:07:24.262 ***********
2025-06-22 12:06:13.305130 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305136 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305141 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305146 | orchestrator |
2025-06-22 12:06:13.305152 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-22 12:06:13.305157 | orchestrator | Sunday 22 June 2025 12:01:40 +0000 (0:00:00.342) 0:07:24.605 ***********
2025-06-22 12:06:13.305162 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305168 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305173 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305179 | orchestrator |
2025-06-22 12:06:13.305184 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-22 12:06:13.305189 | orchestrator | Sunday 22 June 2025 12:01:40 +0000 (0:00:00.734) 0:07:25.340 ***********
2025-06-22 12:06:13.305195 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305200 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305205 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305211 | orchestrator |
2025-06-22 12:06:13.305216 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-22 12:06:13.305221 | orchestrator | Sunday 22 June 2025 12:01:41 +0000 (0:00:00.698) 0:07:26.039 ***********
2025-06-22 12:06:13.305227 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305232 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305237 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305243 | orchestrator |
2025-06-22 12:06:13.305248 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-22 12:06:13.305254 | orchestrator | Sunday 22 June 2025 12:01:42 +0000 (0:00:00.566) 0:07:26.606 ***********
2025-06-22 12:06:13.305259 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305264 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305270 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305275 | orchestrator |
2025-06-22 12:06:13.305280 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-22 12:06:13.305286 | orchestrator | Sunday 22 June 2025 12:01:42 +0000 (0:00:00.324) 0:07:26.930 ***********
2025-06-22 12:06:13.305291 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305296 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305302 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305310 | orchestrator |
2025-06-22 12:06:13.305316 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-22 12:06:13.305321 | orchestrator | Sunday 22 June 2025 12:01:42 +0000 (0:00:00.335) 0:07:27.265 ***********
2025-06-22 12:06:13.305326 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305332 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305337 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305342 | orchestrator |
2025-06-22 12:06:13.305348 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-22 12:06:13.305353 | orchestrator | Sunday 22 June 2025 12:01:43 +0000 (0:00:00.337) 0:07:27.603 ***********
2025-06-22 12:06:13.305358 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305364 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305369 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305374 | orchestrator |
2025-06-22 12:06:13.305379 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-22 12:06:13.305385 | orchestrator | Sunday 22 June 2025 12:01:43 +0000 (0:00:00.602) 0:07:28.205 ***********
2025-06-22 12:06:13.305393 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305402 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305407 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305413 | orchestrator |
2025-06-22 12:06:13.305418 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-22 12:06:13.305423 | orchestrator | Sunday 22 June 2025 12:01:44 +0000 (0:00:00.306) 0:07:28.511 ***********
2025-06-22 12:06:13.305429 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305434 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305440 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305445 | orchestrator |
2025-06-22 12:06:13.305450 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-22 12:06:13.305456 | orchestrator | Sunday 22 June 2025 12:01:44 +0000 (0:00:00.310) 0:07:28.822 ***********
2025-06-22 12:06:13.305461 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305466 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305472 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305477 | orchestrator |
2025-06-22 12:06:13.305482 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-22 12:06:13.305488 | orchestrator | Sunday 22 June 2025 12:01:44 +0000 (0:00:00.313) 0:07:29.135 ***********
2025-06-22 12:06:13.305493 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305498 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305504 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305509 | orchestrator |
2025-06-22 12:06:13.305514 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-22 12:06:13.305520 | orchestrator | Sunday 22 June 2025 12:01:45 +0000 (0:00:00.638) 0:07:29.773 ***********
2025-06-22 12:06:13.305525 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305530 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305535 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305541 | orchestrator |
2025-06-22 12:06:13.305546 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-06-22 12:06:13.305551 | orchestrator | Sunday 22 June 2025 12:01:45 +0000 (0:00:00.565) 0:07:30.339 ***********
2025-06-22 12:06:13.305557 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305562 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305567 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305573 | orchestrator |
2025-06-22 12:06:13.305578 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-06-22 12:06:13.305583 | orchestrator | Sunday 22 June 2025 12:01:46 +0000 (0:00:00.361) 0:07:30.701 ***********
2025-06-22 12:06:13.305589 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-22 12:06:13.305594 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-22 12:06:13.305600 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-22 12:06:13.305608 | orchestrator |
2025-06-22 12:06:13.305614 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-06-22 12:06:13.305619 | orchestrator | Sunday 22 June 2025 12:01:47 +0000 (0:00:00.921) 0:07:31.622 ***********
2025-06-22 12:06:13.305624 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-22 12:06:13.305630 | orchestrator |
2025-06-22 12:06:13.305635 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-06-22 12:06:13.305640 | orchestrator | Sunday 22 June 2025 12:01:48 +0000 (0:00:00.865) 0:07:32.487 ***********
2025-06-22 12:06:13.305646 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305651 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305656 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305662 | orchestrator |
2025-06-22 12:06:13.305667 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-06-22 12:06:13.305672 | orchestrator | Sunday 22 June 2025 12:01:48 +0000 (0:00:00.304) 0:07:32.791 ***********
2025-06-22 12:06:13.305678 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:06:13.305683 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:06:13.305688 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:06:13.305694 | orchestrator |
2025-06-22 12:06:13.305699 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-06-22 12:06:13.305704 | orchestrator | Sunday 22 June 2025 12:01:48 +0000 (0:00:00.306) 0:07:33.098 ***********
2025-06-22 12:06:13.305710 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305715 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305720 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305725 | orchestrator |
2025-06-22 12:06:13.305731 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-06-22 12:06:13.305736 | orchestrator | Sunday 22 June 2025 12:01:49 +0000 (0:00:01.033) 0:07:34.131 ***********
2025-06-22 12:06:13.305741 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:06:13.305747 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:06:13.305752 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:06:13.305757 | orchestrator |
2025-06-22 12:06:13.305763 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-06-22 12:06:13.305768 | orchestrator | Sunday 22 June 2025 12:01:50 +0000 (0:00:00.391) 0:07:34.523 ***********
2025-06-22 12:06:13.305773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-22 12:06:13.305779 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-22 12:06:13.305784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-22 12:06:13.305790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-22 12:06:13.305795 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-22 12:06:13.305800 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-22 12:06:13.305805 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-22 12:06:13.305816 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-22 12:06:13.305822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-22 12:06:13.305828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-22 12:06:13.305833 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-22 12:06:13.305838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-22 12:06:13.305844 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-22 12:06:13.305849 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-22 12:06:13.305858 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-22 12:06:13.305863 | orchestrator |
2025-06-22 12:06:13.305868 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-06-22 12:06:13.305874 | orchestrator | Sunday 22 June 2025 12:01:52 +0000 (0:00:02.370) 0:07:36.893 *********** 2025-06-22 12:06:13.305879 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.305885 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.305890 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.305895 | orchestrator | 2025-06-22 12:06:13.305909 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-22 12:06:13.305915 | orchestrator | Sunday 22 June 2025 12:01:52 +0000 (0:00:00.299) 0:07:37.193 *********** 2025-06-22 12:06:13.305920 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.305926 | orchestrator | 2025-06-22 12:06:13.305931 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-22 12:06:13.305936 | orchestrator | Sunday 22 June 2025 12:01:53 +0000 (0:00:00.905) 0:07:38.098 *********** 2025-06-22 12:06:13.305942 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 12:06:13.305947 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 12:06:13.305952 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-22 12:06:13.305958 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 12:06:13.305963 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-22 12:06:13.305968 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-22 12:06:13.305974 | orchestrator | 2025-06-22 12:06:13.305979 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-22 12:06:13.305984 | orchestrator | Sunday 22 June 2025 12:01:54 +0000 (0:00:01.215) 0:07:39.314 *********** 2025-06-22 12:06:13.305990 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.305995 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 12:06:13.306000 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:06:13.306006 | orchestrator | 2025-06-22 12:06:13.306011 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-22 12:06:13.306029 | orchestrator | Sunday 22 June 2025 12:01:57 +0000 (0:00:02.171) 0:07:41.485 *********** 2025-06-22 12:06:13.306035 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 12:06:13.306040 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 12:06:13.306046 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.306051 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 12:06:13.306056 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 12:06:13.306061 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.306067 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 12:06:13.306072 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 12:06:13.306077 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.306083 | orchestrator | 2025-06-22 12:06:13.306088 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-22 12:06:13.306094 | orchestrator | Sunday 22 June 2025 12:01:58 +0000 (0:00:01.455) 0:07:42.941 *********** 2025-06-22 12:06:13.306099 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:06:13.306104 | orchestrator | 2025-06-22 12:06:13.306110 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-22 12:06:13.306115 | orchestrator | Sunday 22 June 2025 12:02:00 +0000 (0:00:02.280) 0:07:45.222 *********** 2025-06-22 12:06:13.306120 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.306129 | orchestrator | 2025-06-22 12:06:13.306134 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-22 12:06:13.306140 | orchestrator | Sunday 22 June 2025 12:02:01 +0000 (0:00:00.552) 0:07:45.774 *********** 2025-06-22 12:06:13.306145 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d90edff2-979c-5e5e-98e2-f02394d35fb4', 'data_vg': 'ceph-d90edff2-979c-5e5e-98e2-f02394d35fb4'}) 2025-06-22 12:06:13.306151 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f', 'data_vg': 'ceph-6ffadd37-6b10-5a4f-8f0b-2da52ae5008f'}) 2025-06-22 12:06:13.306157 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8a4028de-648e-5a19-94a5-5dc0f00dede1', 'data_vg': 'ceph-8a4028de-648e-5a19-94a5-5dc0f00dede1'}) 2025-06-22 12:06:13.306162 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9de1692c-afc0-5cdb-8a59-e564d6a096fc', 'data_vg': 'ceph-9de1692c-afc0-5cdb-8a59-e564d6a096fc'}) 2025-06-22 12:06:13.306171 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0b51a6ec-8722-57c7-ad6b-56758d62ede6', 'data_vg': 'ceph-0b51a6ec-8722-57c7-ad6b-56758d62ede6'}) 2025-06-22 12:06:13.306177 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1d622d46-9f3b-5fb0-a039-cce126484330', 'data_vg': 'ceph-1d622d46-9f3b-5fb0-a039-cce126484330'}) 2025-06-22 12:06:13.306182 | orchestrator | 2025-06-22 12:06:13.306188 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-22 12:06:13.306193 | orchestrator | Sunday 22 June 2025 12:02:44 +0000 (0:00:43.151) 0:08:28.926 *********** 2025-06-22 12:06:13.306199 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306204 | orchestrator | skipping: [testbed-node-4] 2025-06-22 
12:06:13.306229 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306234 | orchestrator | 2025-06-22 12:06:13.306240 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-22 12:06:13.306245 | orchestrator | Sunday 22 June 2025 12:02:45 +0000 (0:00:00.611) 0:08:29.537 *********** 2025-06-22 12:06:13.306250 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.306256 | orchestrator | 2025-06-22 12:06:13.306261 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-22 12:06:13.306266 | orchestrator | Sunday 22 June 2025 12:02:45 +0000 (0:00:00.549) 0:08:30.087 *********** 2025-06-22 12:06:13.306272 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.306277 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.306282 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.306288 | orchestrator | 2025-06-22 12:06:13.306293 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-22 12:06:13.306298 | orchestrator | Sunday 22 June 2025 12:02:46 +0000 (0:00:00.631) 0:08:30.719 *********** 2025-06-22 12:06:13.306304 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.306309 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.306314 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.306320 | orchestrator | 2025-06-22 12:06:13.306325 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-22 12:06:13.306330 | orchestrator | Sunday 22 June 2025 12:02:49 +0000 (0:00:02.852) 0:08:33.571 *********** 2025-06-22 12:06:13.306336 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.306341 | orchestrator | 2025-06-22 12:06:13.306346 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-22 12:06:13.306352 | orchestrator | Sunday 22 June 2025 12:02:49 +0000 (0:00:00.552) 0:08:34.124 *********** 2025-06-22 12:06:13.306357 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.306362 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.306368 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.306373 | orchestrator | 2025-06-22 12:06:13.306378 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-22 12:06:13.306415 | orchestrator | Sunday 22 June 2025 12:02:50 +0000 (0:00:01.215) 0:08:35.339 *********** 2025-06-22 12:06:13.306421 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.306427 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.306432 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.306437 | orchestrator | 2025-06-22 12:06:13.306443 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-22 12:06:13.306448 | orchestrator | Sunday 22 June 2025 12:02:52 +0000 (0:00:01.447) 0:08:36.787 *********** 2025-06-22 12:06:13.306453 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.306458 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.306464 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.306469 | orchestrator | 2025-06-22 12:06:13.306474 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-22 12:06:13.306479 | orchestrator | Sunday 22 June 2025 12:02:54 +0000 (0:00:01.822) 0:08:38.610 *********** 2025-06-22 12:06:13.306485 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306490 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306495 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306500 | orchestrator | 2025-06-22 12:06:13.306506 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-22 12:06:13.306511 | orchestrator | Sunday 22 June 2025 12:02:54 +0000 (0:00:00.362) 0:08:38.973 *********** 2025-06-22 12:06:13.306516 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306522 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306527 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306532 | orchestrator | 2025-06-22 12:06:13.306537 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-22 12:06:13.306543 | orchestrator | Sunday 22 June 2025 12:02:54 +0000 (0:00:00.403) 0:08:39.376 *********** 2025-06-22 12:06:13.306548 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 12:06:13.306553 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-22 12:06:13.306559 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-06-22 12:06:13.306564 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-06-22 12:06:13.306569 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-22 12:06:13.306574 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-22 12:06:13.306580 | orchestrator | 2025-06-22 12:06:13.306585 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-22 12:06:13.306590 | orchestrator | Sunday 22 June 2025 12:02:56 +0000 (0:00:01.375) 0:08:40.752 *********** 2025-06-22 12:06:13.306596 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 12:06:13.306601 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-22 12:06:13.306606 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-22 12:06:13.306612 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-22 12:06:13.306617 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 12:06:13.306622 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 12:06:13.306628 | orchestrator | 2025-06-22 12:06:13.306633 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-22 12:06:13.306638 | orchestrator | Sunday 22 June 2025 12:02:58 +0000 (0:00:02.158) 0:08:42.910 *********** 2025-06-22 12:06:13.306650 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 12:06:13.306656 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-22 12:06:13.306661 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-22 12:06:13.306666 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-22 12:06:13.306671 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 12:06:13.306677 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 12:06:13.306682 | orchestrator | 2025-06-22 12:06:13.306687 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-22 12:06:13.306692 | orchestrator | Sunday 22 June 2025 12:03:02 +0000 (0:00:04.139) 0:08:47.050 *********** 2025-06-22 12:06:13.306698 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306706 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306712 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:06:13.306717 | orchestrator | 2025-06-22 12:06:13.306722 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-22 12:06:13.306727 | orchestrator | Sunday 22 June 2025 12:03:05 +0000 (0:00:03.267) 0:08:50.317 *********** 2025-06-22 12:06:13.306733 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306738 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306743 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-06-22 12:06:13.306749 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:06:13.306754 | orchestrator | 2025-06-22 12:06:13.306760 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-22 12:06:13.306765 | orchestrator | Sunday 22 June 2025 12:03:19 +0000 (0:00:13.158) 0:09:03.476 *********** 2025-06-22 12:06:13.306770 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306775 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306781 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306786 | orchestrator | 2025-06-22 12:06:13.306791 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 12:06:13.306797 | orchestrator | Sunday 22 June 2025 12:03:19 +0000 (0:00:00.878) 0:09:04.354 *********** 2025-06-22 12:06:13.306802 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306807 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306813 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306818 | orchestrator | 2025-06-22 12:06:13.306823 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 12:06:13.306829 | orchestrator | Sunday 22 June 2025 12:03:20 +0000 (0:00:00.669) 0:09:05.024 *********** 2025-06-22 12:06:13.306834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.306839 | orchestrator | 2025-06-22 12:06:13.306845 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 12:06:13.306850 | orchestrator | Sunday 22 June 2025 12:03:21 +0000 (0:00:00.564) 0:09:05.589 *********** 2025-06-22 12:06:13.306855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.306861 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-22 12:06:13.306866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:06:13.306871 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306877 | orchestrator | 2025-06-22 12:06:13.306882 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 12:06:13.306887 | orchestrator | Sunday 22 June 2025 12:03:21 +0000 (0:00:00.406) 0:09:05.996 *********** 2025-06-22 12:06:13.306892 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306898 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306912 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306917 | orchestrator | 2025-06-22 12:06:13.306923 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 12:06:13.306928 | orchestrator | Sunday 22 June 2025 12:03:21 +0000 (0:00:00.317) 0:09:06.313 *********** 2025-06-22 12:06:13.306934 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306939 | orchestrator | 2025-06-22 12:06:13.306945 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 12:06:13.306950 | orchestrator | Sunday 22 June 2025 12:03:22 +0000 (0:00:00.227) 0:09:06.541 *********** 2025-06-22 12:06:13.306956 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306961 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.306966 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.306972 | orchestrator | 2025-06-22 12:06:13.306977 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 12:06:13.306983 | orchestrator | Sunday 22 June 2025 12:03:22 +0000 (0:00:00.604) 0:09:07.146 *********** 2025-06-22 12:06:13.306994 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.306999 | orchestrator | 2025-06-22 12:06:13.307004 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 12:06:13.307010 | orchestrator | Sunday 22 June 2025 12:03:22 +0000 (0:00:00.215) 0:09:07.361 *********** 2025-06-22 12:06:13.307015 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307021 | orchestrator | 2025-06-22 12:06:13.307026 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 12:06:13.307031 | orchestrator | Sunday 22 June 2025 12:03:23 +0000 (0:00:00.230) 0:09:07.592 *********** 2025-06-22 12:06:13.307037 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307042 | orchestrator | 2025-06-22 12:06:13.307048 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-22 12:06:13.307053 | orchestrator | Sunday 22 June 2025 12:03:23 +0000 (0:00:00.121) 0:09:07.714 *********** 2025-06-22 12:06:13.307059 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307064 | orchestrator | 2025-06-22 12:06:13.307070 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 12:06:13.307075 | orchestrator | Sunday 22 June 2025 12:03:23 +0000 (0:00:00.245) 0:09:07.959 *********** 2025-06-22 12:06:13.307081 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307086 | orchestrator | 2025-06-22 12:06:13.307092 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-22 12:06:13.307104 | orchestrator | Sunday 22 June 2025 12:03:23 +0000 (0:00:00.250) 0:09:08.209 *********** 2025-06-22 12:06:13.307110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.307116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:06:13.307121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:06:13.307127 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
12:06:13.307132 | orchestrator | 2025-06-22 12:06:13.307138 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 12:06:13.307143 | orchestrator | Sunday 22 June 2025 12:03:24 +0000 (0:00:00.514) 0:09:08.724 *********** 2025-06-22 12:06:13.307149 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307154 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.307160 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.307165 | orchestrator | 2025-06-22 12:06:13.307170 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 12:06:13.307176 | orchestrator | Sunday 22 June 2025 12:03:24 +0000 (0:00:00.349) 0:09:09.073 *********** 2025-06-22 12:06:13.307181 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307187 | orchestrator | 2025-06-22 12:06:13.307192 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-22 12:06:13.307197 | orchestrator | Sunday 22 June 2025 12:03:25 +0000 (0:00:00.869) 0:09:09.942 *********** 2025-06-22 12:06:13.307203 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307208 | orchestrator | 2025-06-22 12:06:13.307214 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-22 12:06:13.307219 | orchestrator | 2025-06-22 12:06:13.307225 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 12:06:13.307230 | orchestrator | Sunday 22 June 2025 12:03:26 +0000 (0:00:00.664) 0:09:10.607 *********** 2025-06-22 12:06:13.307236 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.307241 | orchestrator | 2025-06-22 12:06:13.307247 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-22 12:06:13.307252 | orchestrator | Sunday 22 June 2025 12:03:27 +0000 (0:00:01.215) 0:09:11.823 *********** 2025-06-22 12:06:13.307257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.307267 | orchestrator | 2025-06-22 12:06:13.307272 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 12:06:13.307278 | orchestrator | Sunday 22 June 2025 12:03:28 +0000 (0:00:01.277) 0:09:13.100 *********** 2025-06-22 12:06:13.307283 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307288 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.307294 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.307299 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.307305 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.307310 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.307316 | orchestrator | 2025-06-22 12:06:13.307321 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 12:06:13.307327 | orchestrator | Sunday 22 June 2025 12:03:29 +0000 (0:00:01.256) 0:09:14.357 *********** 2025-06-22 12:06:13.307332 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.307338 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.307343 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.307349 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.307355 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.307360 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.307365 | orchestrator | 2025-06-22 12:06:13.307371 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 12:06:13.307376 | orchestrator | Sunday 22 
June 2025 12:03:30 +0000 (0:00:00.716) 0:09:15.074 *********** 2025-06-22 12:06:13.307382 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.307387 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.307392 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.307398 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.307403 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.307409 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.307414 | orchestrator | 2025-06-22 12:06:13.307419 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 12:06:13.307425 | orchestrator | Sunday 22 June 2025 12:03:31 +0000 (0:00:00.929) 0:09:16.003 *********** 2025-06-22 12:06:13.307430 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.307436 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.307441 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.307447 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.307452 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.307457 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.307463 | orchestrator | 2025-06-22 12:06:13.307468 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 12:06:13.307474 | orchestrator | Sunday 22 June 2025 12:03:32 +0000 (0:00:00.773) 0:09:16.776 *********** 2025-06-22 12:06:13.307479 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307484 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.307490 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.307495 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.307501 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.307506 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.307511 | orchestrator | 2025-06-22 12:06:13.307517 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-22 12:06:13.307522 | orchestrator | Sunday 22 June 2025 12:03:33 +0000 (0:00:01.346) 0:09:18.122 *********** 2025-06-22 12:06:13.307528 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307533 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.307538 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.307544 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.307549 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.307555 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.307560 | orchestrator | 2025-06-22 12:06:13.307566 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 12:06:13.307577 | orchestrator | Sunday 22 June 2025 12:03:34 +0000 (0:00:00.643) 0:09:18.766 *********** 2025-06-22 12:06:13.307586 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.307591 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.307596 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.307602 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:06:13.307607 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:06:13.307612 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:06:13.307618 | orchestrator | 2025-06-22 12:06:13.307623 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 12:06:13.307628 | orchestrator | Sunday 22 June 2025 12:03:35 +0000 (0:00:00.874) 0:09:19.641 *********** 2025-06-22 12:06:13.307634 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.307639 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.307645 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.307650 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.307655 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.307661 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.307666 | 
orchestrator | 2025-06-22 12:06:13.307671 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 22 June 2025 12:03:36 +0000 (0:00:01.152) 0:09:20.793 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 22 June 2025 12:03:37 +0000 (0:00:01.529) 0:09:22.323 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 22 June 2025 12:03:38 +0000 (0:00:00.600) 0:09:22.923 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 22 June 2025 12:03:39 +0000 (0:00:00.821) 0:09:23.744 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 22 June 2025 12:03:39 +0000 (0:00:00.606) 0:09:24.351 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 22 June 2025 12:03:40 +0000 (0:00:00.812) 0:09:25.163 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 22 June 2025 12:03:41 +0000 (0:00:00.603) 0:09:25.766 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 22 June 2025 12:03:42 +0000 (0:00:00.821) 0:09:26.588 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 22 June 2025 12:03:42 +0000 (0:00:00.602) 0:09:27.191 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 22 June 2025 12:03:43 +0000 (0:00:00.815) 0:09:28.006 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 22 June 2025 12:03:44 +0000 (0:00:00.648) 0:09:28.654 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Sunday 22 June 2025 12:03:45 +0000 (0:00:01.297) 0:09:29.952 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Sunday 22 June 2025 12:03:49 +0000 (0:00:04.239) 0:09:34.191 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Sunday 22 June 2025 12:03:51 +0000 (0:00:02.062) 0:09:36.253 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Sunday 22 June 2025 12:03:53 +0000 (0:00:01.744) 0:09:37.998 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
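For context on the ceph-crash tasks above: the "Create client.crash keyring" step corresponds to the keyring the Ceph documentation describes for the crash agent (`ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash'`), and the systemd.yml include that follows templates a unit that runs `ceph-crash` in a container. The real template lives in the ceph-ansible role; the sketch below is a hypothetical reconstruction, and the container runtime path, image name, and flags are assumptions, not the actual rendered unit:

```ini
# /etc/systemd/system/ceph-crash@.service -- hypothetical sketch only.
# ceph-ansible renders the real unit from its role template; image name,
# registry, and ExecStart flags here are illustrative assumptions.
[Unit]
Description=Ceph crash dump collector
After=network-online.target

[Service]
# Remove any stale container before starting a fresh one
ExecStartPre=-/usr/bin/docker rm -f ceph-crash-%i
# Mount /etc/ceph (client.crash keyring) and the crash spool directory,
# including the posted/ subdirectory created by the task above
ExecStart=/usr/bin/docker run --rm --name ceph-crash-%i \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/crash:/var/lib/ceph/crash \
    --entrypoint=/usr/bin/ceph-crash \
    quay.io/ceph/daemon:latest
ExecStop=-/usr/bin/docker stop ceph-crash-%i
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```

The agent watches /var/lib/ceph/crash for new crash dumps and moves reported ones into /var/lib/ceph/crash/posted, which is why the play creates that directory before starting the service.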
2025-06-22 12:06:13.308371 | orchestrator | Sunday 22 June 2025 12:03:54 +0000 (0:00:01.114) 0:09:39.112 *********** 2025-06-22 12:06:13.308376 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.308381 | orchestrator | 2025-06-22 12:06:13.308385 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-22 12:06:13.308390 | orchestrator | Sunday 22 June 2025 12:03:56 +0000 (0:00:01.360) 0:09:40.473 *********** 2025-06-22 12:06:13.308395 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.308399 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.308404 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.308409 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.308413 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.308418 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.308423 | orchestrator | 2025-06-22 12:06:13.308428 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-22 12:06:13.308432 | orchestrator | Sunday 22 June 2025 12:03:57 +0000 (0:00:01.868) 0:09:42.342 *********** 2025-06-22 12:06:13.308437 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.308442 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.308447 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.308451 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.308456 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.308461 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.308465 | orchestrator | 2025-06-22 12:06:13.308470 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-22 12:06:13.308475 | orchestrator | Sunday 22 June 2025 12:04:01 +0000 (0:00:03.424) 0:09:45.766 
*********** 2025-06-22 12:06:13.308480 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:06:13.308485 | orchestrator | 2025-06-22 12:06:13.308489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-22 12:06:13.308498 | orchestrator | Sunday 22 June 2025 12:04:02 +0000 (0:00:01.294) 0:09:47.061 *********** 2025-06-22 12:06:13.308502 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308507 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.308512 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308517 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.308521 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.308526 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.308531 | orchestrator | 2025-06-22 12:06:13.308535 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-22 12:06:13.308540 | orchestrator | Sunday 22 June 2025 12:04:03 +0000 (0:00:00.959) 0:09:48.020 *********** 2025-06-22 12:06:13.308545 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.308550 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.308554 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:06:13.308559 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:06:13.308564 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:06:13.308568 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.308573 | orchestrator | 2025-06-22 12:06:13.308582 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-22 12:06:13.308587 | orchestrator | Sunday 22 June 2025 12:04:06 +0000 (0:00:02.612) 0:09:50.633 *********** 2025-06-22 12:06:13.308592 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308596 | orchestrator | ok: 
[testbed-node-4] 2025-06-22 12:06:13.308601 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308606 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:06:13.308611 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:06:13.308615 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:06:13.308620 | orchestrator | 2025-06-22 12:06:13.308625 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-22 12:06:13.308629 | orchestrator | 2025-06-22 12:06:13.308634 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 12:06:13.308639 | orchestrator | Sunday 22 June 2025 12:04:07 +0000 (0:00:01.122) 0:09:51.756 *********** 2025-06-22 12:06:13.308644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.308649 | orchestrator | 2025-06-22 12:06:13.308653 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 12:06:13.308658 | orchestrator | Sunday 22 June 2025 12:04:07 +0000 (0:00:00.555) 0:09:52.311 *********** 2025-06-22 12:06:13.308663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.308668 | orchestrator | 2025-06-22 12:06:13.308672 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 12:06:13.308677 | orchestrator | Sunday 22 June 2025 12:04:08 +0000 (0:00:00.774) 0:09:53.086 *********** 2025-06-22 12:06:13.308682 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.308686 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.308691 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.308696 | orchestrator | 2025-06-22 12:06:13.308701 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-06-22 12:06:13.308705 | orchestrator | Sunday 22 June 2025 12:04:09 +0000 (0:00:00.326) 0:09:53.412 *********** 2025-06-22 12:06:13.308710 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308715 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.308719 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308724 | orchestrator | 2025-06-22 12:06:13.308729 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 12:06:13.308733 | orchestrator | Sunday 22 June 2025 12:04:09 +0000 (0:00:00.697) 0:09:54.110 *********** 2025-06-22 12:06:13.308738 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308743 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.308748 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308752 | orchestrator | 2025-06-22 12:06:13.308760 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 12:06:13.308765 | orchestrator | Sunday 22 June 2025 12:04:10 +0000 (0:00:00.951) 0:09:55.062 *********** 2025-06-22 12:06:13.308770 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308775 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.308779 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308784 | orchestrator | 2025-06-22 12:06:13.308789 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 12:06:13.308793 | orchestrator | Sunday 22 June 2025 12:04:11 +0000 (0:00:00.700) 0:09:55.762 *********** 2025-06-22 12:06:13.308798 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.308803 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.308807 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.308812 | orchestrator | 2025-06-22 12:06:13.308817 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 
12:06:13.308822 | orchestrator | Sunday 22 June 2025 12:04:11 +0000 (0:00:00.312) 0:09:56.074 *********** 2025-06-22 12:06:13.308826 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.308831 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.308836 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.308840 | orchestrator | 2025-06-22 12:06:13.308845 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 12:06:13.308850 | orchestrator | Sunday 22 June 2025 12:04:11 +0000 (0:00:00.265) 0:09:56.339 *********** 2025-06-22 12:06:13.308855 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.308859 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.308864 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.308869 | orchestrator | 2025-06-22 12:06:13.308873 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 12:06:13.308878 | orchestrator | Sunday 22 June 2025 12:04:12 +0000 (0:00:00.463) 0:09:56.803 *********** 2025-06-22 12:06:13.308883 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308887 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.308892 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308897 | orchestrator | 2025-06-22 12:06:13.308910 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 12:06:13.308915 | orchestrator | Sunday 22 June 2025 12:04:13 +0000 (0:00:00.739) 0:09:57.542 *********** 2025-06-22 12:06:13.308920 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.308925 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.308929 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.308934 | orchestrator | 2025-06-22 12:06:13.308939 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 12:06:13.308943 | orchestrator | Sunday 
22 June 2025 12:04:13 +0000 (0:00:00.708) 0:09:58.251 *********** 2025-06-22 12:06:13.308948 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.308953 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.308957 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.308962 | orchestrator | 2025-06-22 12:06:13.308967 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 12:06:13.308972 | orchestrator | Sunday 22 June 2025 12:04:14 +0000 (0:00:00.280) 0:09:58.531 *********** 2025-06-22 12:06:13.308976 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.308981 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.308986 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.308990 | orchestrator | 2025-06-22 12:06:13.308995 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 12:06:13.309004 | orchestrator | Sunday 22 June 2025 12:04:14 +0000 (0:00:00.452) 0:09:58.984 *********** 2025-06-22 12:06:13.309009 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309014 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309019 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309024 | orchestrator | 2025-06-22 12:06:13.309029 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 12:06:13.309037 | orchestrator | Sunday 22 June 2025 12:04:15 +0000 (0:00:00.441) 0:09:59.425 *********** 2025-06-22 12:06:13.309042 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309047 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309052 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309056 | orchestrator | 2025-06-22 12:06:13.309061 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 12:06:13.309066 | orchestrator | Sunday 22 June 2025 12:04:15 +0000 
(0:00:00.404) 0:09:59.830 *********** 2025-06-22 12:06:13.309071 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309075 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309080 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309085 | orchestrator | 2025-06-22 12:06:13.309090 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 12:06:13.309095 | orchestrator | Sunday 22 June 2025 12:04:15 +0000 (0:00:00.341) 0:10:00.171 *********** 2025-06-22 12:06:13.309099 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.309104 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.309109 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.309114 | orchestrator | 2025-06-22 12:06:13.309119 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 12:06:13.309123 | orchestrator | Sunday 22 June 2025 12:04:16 +0000 (0:00:00.716) 0:10:00.888 *********** 2025-06-22 12:06:13.309128 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.309133 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.309138 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.309142 | orchestrator | 2025-06-22 12:06:13.309147 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 12:06:13.309152 | orchestrator | Sunday 22 June 2025 12:04:16 +0000 (0:00:00.331) 0:10:01.219 *********** 2025-06-22 12:06:13.309157 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.309161 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.309166 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.309171 | orchestrator | 2025-06-22 12:06:13.309176 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 12:06:13.309181 | orchestrator | Sunday 22 June 2025 12:04:17 +0000 (0:00:00.326) 
0:10:01.546 *********** 2025-06-22 12:06:13.309185 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309190 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309195 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309200 | orchestrator | 2025-06-22 12:06:13.309204 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 12:06:13.309209 | orchestrator | Sunday 22 June 2025 12:04:17 +0000 (0:00:00.325) 0:10:01.872 *********** 2025-06-22 12:06:13.309214 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309219 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309223 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309228 | orchestrator | 2025-06-22 12:06:13.309233 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-22 12:06:13.309238 | orchestrator | Sunday 22 June 2025 12:04:18 +0000 (0:00:00.944) 0:10:02.816 *********** 2025-06-22 12:06:13.309242 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.309247 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.309252 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-22 12:06:13.309257 | orchestrator | 2025-06-22 12:06:13.309262 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-22 12:06:13.309267 | orchestrator | Sunday 22 June 2025 12:04:18 +0000 (0:00:00.460) 0:10:03.277 *********** 2025-06-22 12:06:13.309272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:06:13.309276 | orchestrator | 2025-06-22 12:06:13.309281 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-22 12:06:13.309286 | orchestrator | Sunday 22 June 2025 12:04:20 +0000 (0:00:02.126) 0:10:05.404 *********** 2025-06-22 12:06:13.309291 | orchestrator | skipping: [testbed-node-3] 
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-22 12:06:13.309301 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.309306 | orchestrator | 2025-06-22 12:06:13.309310 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-22 12:06:13.309315 | orchestrator | Sunday 22 June 2025 12:04:21 +0000 (0:00:00.195) 0:10:05.599 *********** 2025-06-22 12:06:13.309321 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:06:13.309329 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:06:13.309334 | orchestrator | 2025-06-22 12:06:13.309339 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-22 12:06:13.309344 | orchestrator | Sunday 22 June 2025 12:04:30 +0000 (0:00:09.436) 0:10:15.036 *********** 2025-06-22 12:06:13.309349 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:06:13.309354 | orchestrator | 2025-06-22 12:06:13.309358 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-22 12:06:13.309366 | orchestrator | Sunday 22 June 2025 12:04:34 +0000 (0:00:04.067) 0:10:19.103 *********** 2025-06-22 12:06:13.309373 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-22 12:06:13.309378 | orchestrator | 2025-06-22 12:06:13.309383 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-22 12:06:13.309388 | orchestrator | Sunday 22 June 2025 12:04:35 +0000 (0:00:00.548) 0:10:19.651 *********** 2025-06-22 12:06:13.309393 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 12:06:13.309398 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 12:06:13.309402 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 12:06:13.309407 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-22 12:06:13.309412 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-22 12:06:13.309417 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-22 12:06:13.309421 | orchestrator | 2025-06-22 12:06:13.309426 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-22 12:06:13.309431 | orchestrator | Sunday 22 June 2025 12:04:36 +0000 (0:00:01.141) 0:10:20.792 *********** 2025-06-22 12:06:13.309436 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.309440 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 12:06:13.309445 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:06:13.309450 | orchestrator | 2025-06-22 12:06:13.309455 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-22 12:06:13.309459 | orchestrator | Sunday 22 June 2025 12:04:39 +0000 (0:00:02.691) 0:10:23.483 *********** 2025-06-22 12:06:13.309464 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 12:06:13.309469 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-06-22 12:06:13.309474 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309479 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 12:06:13.309483 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 12:06:13.309488 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309493 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 12:06:13.309501 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 12:06:13.309506 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309511 | orchestrator | 2025-06-22 12:06:13.309515 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-22 12:06:13.309520 | orchestrator | Sunday 22 June 2025 12:04:40 +0000 (0:00:01.617) 0:10:25.101 *********** 2025-06-22 12:06:13.309525 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309530 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309535 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309539 | orchestrator | 2025-06-22 12:06:13.309544 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-22 12:06:13.309549 | orchestrator | Sunday 22 June 2025 12:04:43 +0000 (0:00:02.680) 0:10:27.781 *********** 2025-06-22 12:06:13.309554 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.309558 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.309563 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.309568 | orchestrator | 2025-06-22 12:06:13.309572 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-22 12:06:13.309577 | orchestrator | Sunday 22 June 2025 12:04:43 +0000 (0:00:00.402) 0:10:28.183 *********** 2025-06-22 12:06:13.309582 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-22 12:06:13.309587 | orchestrator | 2025-06-22 12:06:13.309592 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-22 12:06:13.309596 | orchestrator | Sunday 22 June 2025 12:04:44 +0000 (0:00:00.877) 0:10:29.061 *********** 2025-06-22 12:06:13.309601 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.309606 | orchestrator | 2025-06-22 12:06:13.309611 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-22 12:06:13.309615 | orchestrator | Sunday 22 June 2025 12:04:45 +0000 (0:00:00.572) 0:10:29.634 *********** 2025-06-22 12:06:13.309620 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309625 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309630 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309634 | orchestrator | 2025-06-22 12:06:13.309639 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-22 12:06:13.309644 | orchestrator | Sunday 22 June 2025 12:04:46 +0000 (0:00:01.231) 0:10:30.865 *********** 2025-06-22 12:06:13.309649 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309654 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309659 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309663 | orchestrator | 2025-06-22 12:06:13.309668 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-22 12:06:13.309673 | orchestrator | Sunday 22 June 2025 12:04:48 +0000 (0:00:01.677) 0:10:32.543 *********** 2025-06-22 12:06:13.309678 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309682 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309687 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309692 | orchestrator | 2025-06-22 
12:06:13.309697 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-22 12:06:13.309701 | orchestrator | Sunday 22 June 2025 12:04:50 +0000 (0:00:01.910) 0:10:34.454 *********** 2025-06-22 12:06:13.309706 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309711 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309716 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309720 | orchestrator | 2025-06-22 12:06:13.309725 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-22 12:06:13.309730 | orchestrator | Sunday 22 June 2025 12:04:52 +0000 (0:00:02.162) 0:10:36.617 *********** 2025-06-22 12:06:13.309740 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309746 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309750 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309758 | orchestrator | 2025-06-22 12:06:13.309763 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 12:06:13.309768 | orchestrator | Sunday 22 June 2025 12:04:53 +0000 (0:00:01.491) 0:10:38.108 *********** 2025-06-22 12:06:13.309772 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309777 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309782 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309787 | orchestrator | 2025-06-22 12:06:13.309791 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 12:06:13.309796 | orchestrator | Sunday 22 June 2025 12:04:54 +0000 (0:00:00.713) 0:10:38.821 *********** 2025-06-22 12:06:13.309801 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.309806 | orchestrator | 2025-06-22 12:06:13.309811 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-06-22 12:06:13.309815 | orchestrator | Sunday 22 June 2025 12:04:55 +0000 (0:00:00.999) 0:10:39.821 *********** 2025-06-22 12:06:13.309820 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309825 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309830 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.309835 | orchestrator | 2025-06-22 12:06:13.309839 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 12:06:13.309844 | orchestrator | Sunday 22 June 2025 12:04:55 +0000 (0:00:00.369) 0:10:40.191 *********** 2025-06-22 12:06:13.309849 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.309854 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.309858 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.309863 | orchestrator | 2025-06-22 12:06:13.309868 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 12:06:13.309873 | orchestrator | Sunday 22 June 2025 12:04:57 +0000 (0:00:01.376) 0:10:41.568 *********** 2025-06-22 12:06:13.309878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.309882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:06:13.309887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:06:13.309892 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.309897 | orchestrator | 2025-06-22 12:06:13.309910 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 12:06:13.309915 | orchestrator | Sunday 22 June 2025 12:04:58 +0000 (0:00:00.880) 0:10:42.448 *********** 2025-06-22 12:06:13.309920 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.309925 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.309930 | orchestrator | ok: [testbed-node-5] 2025-06-22 
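The handler sequence above (set `_mds_handler_called` before the restart, conditionally run the restart script, set it again after) is ceph-ansible's guard so that a daemon is restarted at most once per play, no matter how many tasks notify the handler. A minimal sketch of that pattern in Python, with illustrative names only (the real logic lives in /ansible/roles/ceph-handler/tasks/handler_mdss.yml):

```python
# Sketch of the "handler called" guard seen in the log: restart at most once
# per run. make_restart_handler and its return value are stand-ins, not
# ceph-ansible's actual API.

def make_restart_handler(restart_fn):
    state = {"called": False}  # mirrors the _mds_handler_called fact

    def handler():
        if state["called"]:
            # equivalent to the skipped "Restart ceph mds daemon(s)" task
            return False
        restart_fn()
        state["called"] = True  # "Set _mds_handler_called after restart"
        return True

    return handler

restarts = []
handler = make_restart_handler(lambda: restarts.append("ceph-mds"))
handler()  # performs the restart
handler()  # no-op: the guard flag is already set
```

In the log the restart task is skipped on every node because the daemons were just started for the first time, so the guard never has to fire.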
12:06:13.309935 | orchestrator | 2025-06-22 12:06:13.309939 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 12:06:13.309944 | orchestrator | 2025-06-22 12:06:13.309949 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 12:06:13.309954 | orchestrator | Sunday 22 June 2025 12:04:58 +0000 (0:00:00.924) 0:10:43.373 *********** 2025-06-22 12:06:13.309959 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.309964 | orchestrator | 2025-06-22 12:06:13.309968 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 12:06:13.309973 | orchestrator | Sunday 22 June 2025 12:04:59 +0000 (0:00:00.518) 0:10:43.891 *********** 2025-06-22 12:06:13.309978 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.309983 | orchestrator | 2025-06-22 12:06:13.309988 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 12:06:13.309993 | orchestrator | Sunday 22 June 2025 12:05:00 +0000 (0:00:00.732) 0:10:44.624 *********** 2025-06-22 12:06:13.309997 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310002 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310010 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310034 | orchestrator | 2025-06-22 12:06:13.310041 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 12:06:13.310045 | orchestrator | Sunday 22 June 2025 12:05:00 +0000 (0:00:00.323) 0:10:44.948 *********** 2025-06-22 12:06:13.310050 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310055 | orchestrator | ok: [testbed-node-4] 2025-06-22 
12:06:13.310060 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310065 | orchestrator | 2025-06-22 12:06:13.310069 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 12:06:13.310074 | orchestrator | Sunday 22 June 2025 12:05:01 +0000 (0:00:00.689) 0:10:45.638 *********** 2025-06-22 12:06:13.310079 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310084 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310088 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310093 | orchestrator | 2025-06-22 12:06:13.310098 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 12:06:13.310103 | orchestrator | Sunday 22 June 2025 12:05:01 +0000 (0:00:00.709) 0:10:46.348 *********** 2025-06-22 12:06:13.310107 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310112 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310117 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310122 | orchestrator | 2025-06-22 12:06:13.310127 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 12:06:13.310131 | orchestrator | Sunday 22 June 2025 12:05:03 +0000 (0:00:01.136) 0:10:47.485 *********** 2025-06-22 12:06:13.310136 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310141 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310146 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310151 | orchestrator | 2025-06-22 12:06:13.310155 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 12:06:13.310160 | orchestrator | Sunday 22 June 2025 12:05:03 +0000 (0:00:00.321) 0:10:47.806 *********** 2025-06-22 12:06:13.310165 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310170 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310179 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 12:06:13.310185 | orchestrator | 2025-06-22 12:06:13.310189 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 12:06:13.310194 | orchestrator | Sunday 22 June 2025 12:05:03 +0000 (0:00:00.318) 0:10:48.124 *********** 2025-06-22 12:06:13.310199 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310204 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310209 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310213 | orchestrator | 2025-06-22 12:06:13.310218 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 12:06:13.310223 | orchestrator | Sunday 22 June 2025 12:05:04 +0000 (0:00:00.296) 0:10:48.421 *********** 2025-06-22 12:06:13.310228 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310232 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310237 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310242 | orchestrator | 2025-06-22 12:06:13.310247 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 12:06:13.310251 | orchestrator | Sunday 22 June 2025 12:05:05 +0000 (0:00:01.049) 0:10:49.470 *********** 2025-06-22 12:06:13.310256 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310261 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310266 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310271 | orchestrator | 2025-06-22 12:06:13.310276 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 12:06:13.310280 | orchestrator | Sunday 22 June 2025 12:05:05 +0000 (0:00:00.759) 0:10:50.230 *********** 2025-06-22 12:06:13.310285 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310290 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310295 | orchestrator | skipping: [testbed-node-5] 2025-06-22 
12:06:13.310300 | orchestrator | 2025-06-22 12:06:13.310304 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 12:06:13.310311 | orchestrator | Sunday 22 June 2025 12:05:06 +0000 (0:00:00.306) 0:10:50.536 *********** 2025-06-22 12:06:13.310316 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310321 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310326 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310330 | orchestrator | 2025-06-22 12:06:13.310335 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 12:06:13.310340 | orchestrator | Sunday 22 June 2025 12:05:06 +0000 (0:00:00.294) 0:10:50.831 *********** 2025-06-22 12:06:13.310345 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310350 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310354 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310359 | orchestrator | 2025-06-22 12:06:13.310364 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 12:06:13.310369 | orchestrator | Sunday 22 June 2025 12:05:07 +0000 (0:00:00.660) 0:10:51.491 *********** 2025-06-22 12:06:13.310373 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310378 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310383 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310388 | orchestrator | 2025-06-22 12:06:13.310393 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 12:06:13.310397 | orchestrator | Sunday 22 June 2025 12:05:07 +0000 (0:00:00.338) 0:10:51.830 *********** 2025-06-22 12:06:13.310402 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310407 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310412 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310417 | orchestrator | 2025-06-22 
12:06:13.310421 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 12:06:13.310426 | orchestrator | Sunday 22 June 2025 12:05:07 +0000 (0:00:00.330) 0:10:52.160 *********** 2025-06-22 12:06:13.310431 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310436 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310441 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310445 | orchestrator | 2025-06-22 12:06:13.310450 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 12:06:13.310455 | orchestrator | Sunday 22 June 2025 12:05:08 +0000 (0:00:00.291) 0:10:52.452 *********** 2025-06-22 12:06:13.310460 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310465 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310469 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310474 | orchestrator | 2025-06-22 12:06:13.310479 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 12:06:13.310484 | orchestrator | Sunday 22 June 2025 12:05:08 +0000 (0:00:00.600) 0:10:53.052 *********** 2025-06-22 12:06:13.310488 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310493 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310498 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310503 | orchestrator | 2025-06-22 12:06:13.310508 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 12:06:13.310512 | orchestrator | Sunday 22 June 2025 12:05:08 +0000 (0:00:00.307) 0:10:53.360 *********** 2025-06-22 12:06:13.310517 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310522 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310527 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310531 | orchestrator | 2025-06-22 12:06:13.310536 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 12:06:13.310541 | orchestrator | Sunday 22 June 2025 12:05:09 +0000 (0:00:00.322) 0:10:53.683 *********** 2025-06-22 12:06:13.310546 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.310551 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.310555 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.310560 | orchestrator | 2025-06-22 12:06:13.310565 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-22 12:06:13.310570 | orchestrator | Sunday 22 June 2025 12:05:10 +0000 (0:00:00.783) 0:10:54.466 *********** 2025-06-22 12:06:13.310577 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.310582 | orchestrator | 2025-06-22 12:06:13.310587 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 12:06:13.310592 | orchestrator | Sunday 22 June 2025 12:05:10 +0000 (0:00:00.523) 0:10:54.990 *********** 2025-06-22 12:06:13.310597 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310601 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 12:06:13.310610 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:06:13.310615 | orchestrator | 2025-06-22 12:06:13.310620 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 12:06:13.310625 | orchestrator | Sunday 22 June 2025 12:05:12 +0000 (0:00:02.184) 0:10:57.174 *********** 2025-06-22 12:06:13.310630 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 12:06:13.310635 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 12:06:13.310639 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.310644 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-06-22 12:06:13.310649 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 12:06:13.310654 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.310659 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 12:06:13.310663 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 12:06:13.310668 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.310673 | orchestrator | 2025-06-22 12:06:13.310678 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-22 12:06:13.310683 | orchestrator | Sunday 22 June 2025 12:05:14 +0000 (0:00:01.468) 0:10:58.643 *********** 2025-06-22 12:06:13.310687 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310692 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.310697 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.310702 | orchestrator | 2025-06-22 12:06:13.310706 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-22 12:06:13.310711 | orchestrator | Sunday 22 June 2025 12:05:14 +0000 (0:00:00.352) 0:10:58.995 *********** 2025-06-22 12:06:13.310716 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.310721 | orchestrator | 2025-06-22 12:06:13.310726 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-22 12:06:13.310730 | orchestrator | Sunday 22 June 2025 12:05:15 +0000 (0:00:00.531) 0:10:59.527 *********** 2025-06-22 12:06:13.310735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 12:06:13.310740 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 12:06:13.310745 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 12:06:13.310750 | orchestrator | 2025-06-22 12:06:13.310755 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-22 12:06:13.310760 | orchestrator | Sunday 22 June 2025 12:05:16 +0000 (0:00:01.448) 0:11:00.975 *********** 2025-06-22 12:06:13.310764 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310769 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 12:06:13.310774 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310779 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 12:06:13.310786 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310791 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 12:06:13.310796 | orchestrator | 2025-06-22 12:06:13.310801 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 12:06:13.310806 | orchestrator | Sunday 22 June 2025 12:05:21 +0000 (0:00:04.544) 0:11:05.520 *********** 2025-06-22 12:06:13.310811 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310816 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:06:13.310820 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310825 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:06:13.310830 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:06:13.310835 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:06:13.310839 | orchestrator | 2025-06-22 12:06:13.310844 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 12:06:13.310849 | orchestrator | Sunday 22 June 2025 12:05:23 +0000 (0:00:02.131) 0:11:07.651 *********** 2025-06-22 12:06:13.310854 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 12:06:13.310859 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.310863 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 12:06:13.310868 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.310873 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 12:06:13.310878 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.310882 | orchestrator | 2025-06-22 12:06:13.310887 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-22 12:06:13.310892 | orchestrator | Sunday 22 June 2025 12:05:24 +0000 (0:00:01.338) 0:11:08.990 *********** 2025-06-22 12:06:13.310897 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-22 12:06:13.310909 | orchestrator | 2025-06-22 12:06:13.310914 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-22 12:06:13.310923 | orchestrator | Sunday 22 June 2025 12:05:24 +0000 (0:00:00.237) 0:11:09.227 *********** 2025-06-22 12:06:13.310928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-06-22 12:06:13.310933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310953 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.310958 | orchestrator | 2025-06-22 12:06:13.310962 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-22 12:06:13.310967 | orchestrator | Sunday 22 June 2025 12:05:25 +0000 (0:00:01.172) 0:11:10.400 *********** 2025-06-22 12:06:13.310972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 12:06:13.310999 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
12:06:13.311003 | orchestrator | 2025-06-22 12:06:13.311008 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-22 12:06:13.311013 | orchestrator | Sunday 22 June 2025 12:05:26 +0000 (0:00:00.613) 0:11:11.013 *********** 2025-06-22 12:06:13.311018 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 12:06:13.311023 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 12:06:13.311028 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 12:06:13.311033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 12:06:13.311038 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 12:06:13.311042 | orchestrator | 2025-06-22 12:06:13.311047 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-22 12:06:13.311052 | orchestrator | Sunday 22 June 2025 12:05:58 +0000 (0:00:32.161) 0:11:43.175 *********** 2025-06-22 12:06:13.311057 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.311062 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.311066 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.311071 | orchestrator | 2025-06-22 12:06:13.311076 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-22 12:06:13.311081 | orchestrator | 
Sunday 22 June 2025 12:05:59 +0000 (0:00:00.358) 0:11:43.533 *********** 2025-06-22 12:06:13.311086 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.311091 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.311095 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.311100 | orchestrator | 2025-06-22 12:06:13.311105 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-22 12:06:13.311110 | orchestrator | Sunday 22 June 2025 12:05:59 +0000 (0:00:00.359) 0:11:43.893 *********** 2025-06-22 12:06:13.311114 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.311119 | orchestrator | 2025-06-22 12:06:13.311124 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-22 12:06:13.311129 | orchestrator | Sunday 22 June 2025 12:06:00 +0000 (0:00:00.831) 0:11:44.724 *********** 2025-06-22 12:06:13.311134 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.311139 | orchestrator | 2025-06-22 12:06:13.311144 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-22 12:06:13.311148 | orchestrator | Sunday 22 June 2025 12:06:00 +0000 (0:00:00.593) 0:11:45.318 *********** 2025-06-22 12:06:13.311153 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.311158 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.311163 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.311168 | orchestrator | 2025-06-22 12:06:13.311177 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-22 12:06:13.311182 | orchestrator | Sunday 22 June 2025 12:06:02 +0000 (0:00:01.349) 0:11:46.667 *********** 2025-06-22 12:06:13.311189 | orchestrator | changed: 
[testbed-node-3] 2025-06-22 12:06:13.311194 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.311199 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.311204 | orchestrator | 2025-06-22 12:06:13.311209 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-22 12:06:13.311213 | orchestrator | Sunday 22 June 2025 12:06:03 +0000 (0:00:01.541) 0:11:48.209 *********** 2025-06-22 12:06:13.311218 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:06:13.311223 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:06:13.311228 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:06:13.311233 | orchestrator | 2025-06-22 12:06:13.311238 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-22 12:06:13.311242 | orchestrator | Sunday 22 June 2025 12:06:05 +0000 (0:00:02.087) 0:11:50.297 *********** 2025-06-22 12:06:13.311247 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 12:06:13.311252 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 12:06:13.311257 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 12:06:13.311262 | orchestrator | 2025-06-22 12:06:13.311275 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 12:06:13.311280 | orchestrator | Sunday 22 June 2025 12:06:08 +0000 (0:00:02.774) 0:11:53.072 *********** 2025-06-22 12:06:13.311285 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.311290 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.311295 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.311299 | orchestrator 
| 2025-06-22 12:06:13.311304 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 12:06:13.311309 | orchestrator | Sunday 22 June 2025 12:06:08 +0000 (0:00:00.311) 0:11:53.383 *********** 2025-06-22 12:06:13.311314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:06:13.311319 | orchestrator | 2025-06-22 12:06:13.311323 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 12:06:13.311328 | orchestrator | Sunday 22 June 2025 12:06:09 +0000 (0:00:00.454) 0:11:53.837 *********** 2025-06-22 12:06:13.311333 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.311338 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.311342 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.311347 | orchestrator | 2025-06-22 12:06:13.311352 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-22 12:06:13.311357 | orchestrator | Sunday 22 June 2025 12:06:09 +0000 (0:00:00.423) 0:11:54.261 *********** 2025-06-22 12:06:13.311362 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:06:13.311367 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:06:13.311371 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:06:13.311376 | orchestrator | 2025-06-22 12:06:13.311381 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 12:06:13.311385 | orchestrator | Sunday 22 June 2025 12:06:10 +0000 (0:00:00.295) 0:11:54.556 *********** 2025-06-22 12:06:13.311390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:06:13.311395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:06:13.311400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:06:13.311405 | orchestrator 
| skipping: [testbed-node-3] 2025-06-22 12:06:13.311409 | orchestrator | 2025-06-22 12:06:13.311414 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-22 12:06:13.311419 | orchestrator | Sunday 22 June 2025 12:06:10 +0000 (0:00:00.465) 0:11:55.021 *********** 2025-06-22 12:06:13.311424 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:06:13.311431 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:06:13.311436 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:06:13.311441 | orchestrator | 2025-06-22 12:06:13.311446 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:06:13.311450 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-22 12:06:13.311455 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-22 12:06:13.311460 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-22 12:06:13.311465 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-22 12:06:13.311470 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-22 12:06:13.311475 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-22 12:06:13.311479 | orchestrator | 2025-06-22 12:06:13.311484 | orchestrator | 2025-06-22 12:06:13.311489 | orchestrator | 2025-06-22 12:06:13.311494 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:06:13.311499 | orchestrator | Sunday 22 June 2025 12:06:10 +0000 (0:00:00.215) 0:11:55.237 *********** 2025-06-22 12:06:13.311508 | orchestrator | =============================================================================== 
2025-06-22 12:06:13.311513 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------ 101.82s 2025-06-22 12:06:13.311518 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.15s 2025-06-22 12:06:13.311523 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.16s 2025-06-22 12:06:13.311527 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.31s 2025-06-22 12:06:13.311532 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.90s 2025-06-22 12:06:13.311537 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.20s 2025-06-22 12:06:13.311542 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.16s 2025-06-22 12:06:13.311546 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.96s 2025-06-22 12:06:13.311551 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.24s 2025-06-22 12:06:13.311556 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.44s 2025-06-22 12:06:13.311561 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.81s 2025-06-22 12:06:13.311566 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.63s 2025-06-22 12:06:13.311570 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.74s 2025-06-22 12:06:13.311575 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.54s 2025-06-22 12:06:13.311580 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.24s 2025-06-22 12:06:13.311585 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.14s 2025-06-22 
12:06:13.311589 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.07s 2025-06-22 12:06:13.311594 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.82s 2025-06-22 12:06:13.311599 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.42s 2025-06-22 12:06:13.311604 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.34s 2025-06-22 12:06:13.311609 | orchestrator | 2025-06-22 12:06:13 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:13.311617 | orchestrator | 2025-06-22 12:06:13 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:06:13.311622 | orchestrator | 2025-06-22 12:06:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:16.327690 | orchestrator | 2025-06-22 12:06:16 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:16.329967 | orchestrator | 2025-06-22 12:06:16 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:16.331618 | orchestrator | 2025-06-22 12:06:16 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:06:16.331647 | orchestrator | 2025-06-22 12:06:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:19.374221 | orchestrator | 2025-06-22 12:06:19 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:19.374800 | orchestrator | 2025-06-22 12:06:19 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:19.378502 | orchestrator | 2025-06-22 12:06:19 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:06:19.378562 | orchestrator | 2025-06-22 12:06:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:06:22.433460 | orchestrator | 2025-06-22 12:06:22 | INFO  | Task 
f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:06:22.436335 | orchestrator | 2025-06-22 12:06:22 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:06:22.437866 | orchestrator | 2025-06-22 12:06:22 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:06:22.438093 | orchestrator | 2025-06-22 12:06:22 | INFO  | Wait 1 second(s) until the next check [the same three STARTED status checks repeat every ~3 seconds from 12:06:25 through 12:07:05] 2025-06-22 12:07:08.236003 | orchestrator | 2025-06-22 12:07:08 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:07:08.238804 | orchestrator | 2025-06-22 12:07:08 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in
state STARTED 2025-06-22 12:07:08.241301 | orchestrator | 2025-06-22 12:07:08 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:07:08.241328 | orchestrator | 2025-06-22 12:07:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:07:11.284614 | orchestrator | 2025-06-22 12:07:11 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state STARTED 2025-06-22 12:07:11.285698 | orchestrator | 2025-06-22 12:07:11 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state STARTED 2025-06-22 12:07:11.288600 | orchestrator | 2025-06-22 12:07:11 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:07:11.289448 | orchestrator | 2025-06-22 12:07:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:07:14.339006 | orchestrator | 2025-06-22 12:07:14 | INFO  | Task f1810ee1-6bc8-4953-947d-e739db3b5c4e is in state SUCCESS 2025-06-22 12:07:14.340094 | orchestrator | 2025-06-22 12:07:14.340144 | orchestrator | 2025-06-22 12:07:14.340159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:07:14.340171 | orchestrator | 2025-06-22 12:07:14.340183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:07:14.340194 | orchestrator | Sunday 22 June 2025 12:04:02 +0000 (0:00:00.254) 0:00:00.254 *********** 2025-06-22 12:07:14.340206 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.340591 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.340606 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.340617 | orchestrator | 2025-06-22 12:07:14.340629 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:07:14.340640 | orchestrator | Sunday 22 June 2025 12:04:03 +0000 (0:00:00.404) 0:00:00.659 *********** 2025-06-22 12:07:14.340652 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 
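The repeated "is in state STARTED ... Wait 1 second(s) until the next check" messages above come from a poll-until-terminal-state loop: the deployment client re-queries each task id until it leaves STARTED (here, task f1810ee1 finally reports SUCCESS at 12:07:14). A minimal sketch of that pattern, assuming a hypothetical `get_state` callback rather than the actual osism client API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until none is STARTED anymore (sketch of the
    wait loop seen in the log). get_state(task_id) is assumed to return
    a state string such as 'STARTED' or 'SUCCESS'."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"INFO  | Task {task_id} is in state {state}")
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

The short fixed interval explains why the log carries many near-identical status lines per task before the state flips.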
2025-06-22 12:07:14.340689 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-22 12:07:14.340701 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-22 12:07:14.340712 | orchestrator | 2025-06-22 12:07:14.340723 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-22 12:07:14.340733 | orchestrator | 2025-06-22 12:07:14.340744 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 12:07:14.340754 | orchestrator | Sunday 22 June 2025 12:04:03 +0000 (0:00:00.432) 0:00:01.092 *********** 2025-06-22 12:07:14.340765 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:07:14.340776 | orchestrator | 2025-06-22 12:07:14.340787 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-22 12:07:14.340798 | orchestrator | Sunday 22 June 2025 12:04:04 +0000 (0:00:00.505) 0:00:01.597 *********** 2025-06-22 12:07:14.340809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 12:07:14.340819 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 12:07:14.340830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 12:07:14.340840 | orchestrator | 2025-06-22 12:07:14.340864 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-22 12:07:14.340875 | orchestrator | Sunday 22 June 2025 12:04:04 +0000 (0:00:00.696) 0:00:02.294 *********** 2025-06-22 12:07:14.340890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.340906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.340932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.340956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.340977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.340991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341002 | orchestrator | 2025-06-22 12:07:14.341014 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2025-06-22 12:07:14.341025 | orchestrator | Sunday 22 June 2025 12:04:06 +0000 (0:00:01.736) 0:00:04.031 *********** 2025-06-22 12:07:14.341068 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:07:14.341082 | orchestrator | 2025-06-22 12:07:14.341093 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-22 12:07:14.341104 | orchestrator | Sunday 22 June 2025 12:04:07 +0000 (0:00:00.583) 0:00:04.614 *********** 2025-06-22 12:07:14.341138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341262 | orchestrator | 2025-06-22 12:07:14.341275 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-22 12:07:14.341292 | orchestrator | Sunday 22 June 2025 12:04:10 +0000 (0:00:02.914) 0:00:07.529 *********** 2025-06-22 12:07:14.341306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:07:14.341320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:07:14.341340 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.341362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:07:14.341376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:07:14.341390 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.341410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:07:14.341424 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:07:14.341444 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.341455 | orchestrator | 2025-06-22 12:07:14.341466 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-22 12:07:14.341477 | orchestrator | Sunday 22 June 2025 12:04:10 +0000 (0:00:00.974) 0:00:08.504 *********** 2025-06-22 12:07:14.341495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:07:14.341507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:07:14.341519 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.341535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:07:14.341547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:07:14.341571 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.341588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 12:07:14.341600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 12:07:14.341612 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.341623 | orchestrator | 2025-06-22 12:07:14.341634 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-22 12:07:14.341645 | orchestrator | Sunday 22 June 2025 12:04:11 +0000 (0:00:00.904) 0:00:09.408 *********** 2025-06-22 12:07:14.341661 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.341764 | orchestrator | 2025-06-22 
12:07:14.341775 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-22 12:07:14.341786 | orchestrator | Sunday 22 June 2025 12:04:14 +0000 (0:00:02.363) 0:00:11.772 *********** 2025-06-22 12:07:14.341797 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.341808 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:07:14.341819 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:07:14.341829 | orchestrator | 2025-06-22 12:07:14.341840 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-22 12:07:14.341851 | orchestrator | Sunday 22 June 2025 12:04:17 +0000 (0:00:03.271) 0:00:15.044 *********** 2025-06-22 12:07:14.341862 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:07:14.341872 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:07:14.341883 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.341893 | orchestrator | 2025-06-22 12:07:14.341904 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-22 12:07:14.341915 | orchestrator | Sunday 22 June 2025 12:04:19 +0000 (0:00:01.990) 0:00:17.034 *********** 2025-06-22 12:07:14.341934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 12:07:14.341981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 12:07:14.342000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:07:14.342094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-22 12:07:14.342112 | orchestrator |
2025-06-22 12:07:14.342124 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-22 12:07:14.342135 | orchestrator | Sunday 22 June 2025 12:04:21 +0000 (0:00:02.161) 0:00:19.196 ***********
2025-06-22 12:07:14.342146 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:07:14.342157 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:07:14.342174 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:07:14.342185 | orchestrator |
2025-06-22 12:07:14.342196 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-22 12:07:14.342207 | orchestrator | Sunday 22 June 2025 12:04:21 +0000 (0:00:00.289) 0:00:19.485 ***********
2025-06-22 12:07:14.342217 | orchestrator |
2025-06-22 12:07:14.342237 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-22 12:07:14.342248 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:00.068) 0:00:19.554 ***********
2025-06-22 12:07:14.342259 | orchestrator |
2025-06-22 12:07:14.342270 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-22 12:07:14.342281 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:00.065) 0:00:19.619 ***********
2025-06-22 12:07:14.342291 | orchestrator |
2025-06-22 12:07:14.342302 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-06-22 12:07:14.342313 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:00.278) 0:00:19.897 ***********
2025-06-22 12:07:14.342324 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:07:14.342334 | orchestrator |
2025-06-22 12:07:14.342345 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-06-22 12:07:14.342356 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:00.206) 0:00:20.104 ***********
2025-06-22 12:07:14.342367 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:07:14.342377 | orchestrator |
2025-06-22 12:07:14.342388 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-06-22 12:07:14.342399 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:00.236) 0:00:20.341 ***********
2025-06-22 12:07:14.342410 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:07:14.342421 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:07:14.342432 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:07:14.342443 | orchestrator |
2025-06-22 12:07:14.342454 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-06-22 12:07:14.342465 | orchestrator | Sunday 22 June 2025 12:05:41 +0000 (0:01:19.033) 0:01:39.374 ***********
2025-06-22 12:07:14.342475 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:07:14.342486 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:07:14.342497 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:07:14.342508 | orchestrator |
2025-06-22 12:07:14.342518 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-22 12:07:14.342529 | orchestrator | Sunday 22 June 2025 12:07:02 +0000 (0:01:20.733) 0:03:00.108 ***********
2025-06-22 12:07:14.342540 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:07:14.342551 | orchestrator |
2025-06-22 12:07:14.342562 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-06-22 12:07:14.342573 | orchestrator | Sunday 22 June 2025 12:07:03 +0000 (0:00:00.683) 0:03:00.792 ***********
2025-06-22 12:07:14.342584 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:07:14.342595 | orchestrator |
2025-06-22 12:07:14.342606 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-06-22 12:07:14.342616 | orchestrator | Sunday 22 June 2025 12:07:05 +0000 (0:00:02.401) 0:03:03.193 ***********
2025-06-22 12:07:14.342627 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:07:14.342638 | orchestrator |
2025-06-22 12:07:14.342649 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-06-22 12:07:14.342660 | orchestrator | Sunday 22 June 2025 12:07:08 +0000 (0:00:02.359) 0:03:05.552 ***********
2025-06-22 12:07:14.342670 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:07:14.342681 | orchestrator |
2025-06-22 12:07:14.342692 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-06-22 12:07:14.342703 | orchestrator | Sunday 22 June 2025
12:07:10 +0000 (0:00:02.909) 0:03:08.461 ***********
2025-06-22 12:07:14.342714 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:07:14.342725 | orchestrator |
2025-06-22 12:07:14.342745 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:07:14.342758 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 12:07:14.342770 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 12:07:14.342791 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 12:07:14.342803 | orchestrator |
2025-06-22 12:07:14.342814 | orchestrator |
2025-06-22 12:07:14.342824 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:07:14.342835 | orchestrator | Sunday 22 June 2025 12:07:13 +0000 (0:00:02.425) 0:03:10.887 ***********
2025-06-22 12:07:14.342846 | orchestrator | ===============================================================================
2025-06-22 12:07:14.342857 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.73s
2025-06-22 12:07:14.342868 | orchestrator | opensearch : Restart opensearch container ------------------------------ 79.03s
2025-06-22 12:07:14.342878 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.27s
2025-06-22 12:07:14.342889 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.91s
2025-06-22 12:07:14.342900 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.91s
2025-06-22 12:07:14.342911 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.43s
2025-06-22 12:07:14.342922 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.40s
2025-06-22 12:07:14.342933 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.36s
2025-06-22 12:07:14.342944 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.36s
2025-06-22 12:07:14.342959 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.16s
2025-06-22 12:07:14.342970 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.99s
2025-06-22 12:07:14.342981 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s
2025-06-22 12:07:14.342992 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.97s
2025-06-22 12:07:14.343003 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.90s
2025-06-22 12:07:14.343014 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s
2025-06-22 12:07:14.343025 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s
2025-06-22 12:07:14.343089 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s
2025-06-22 12:07:14.343103 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2025-06-22 12:07:14.343114 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-06-22 12:07:14.343125 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.41s
2025-06-22 12:07:14.346477 | orchestrator | 2025-06-22 12:07:14 | INFO  | Task bdcb2df4-1979-4870-bb73-1e34088ad6a5 is in state SUCCESS
2025-06-22 12:07:14.347393 | orchestrator |
2025-06-22 12:07:14.349144 | orchestrator |
2025-06-22 12:07:14.349175 | orchestrator | PLAY [Set kolla_action_mariadb]
************************************************
2025-06-22 12:07:14.349187 | orchestrator |
2025-06-22 12:07:14.349198 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-22 12:07:14.349612 | orchestrator | Sunday 22 June 2025 12:04:02 +0000 (0:00:00.097) 0:00:00.097 ***********
2025-06-22 12:07:14.349626 | orchestrator | ok: [localhost] => {
2025-06-22 12:07:14.349638 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-06-22 12:07:14.349650 | orchestrator | }
2025-06-22 12:07:14.349661 | orchestrator |
2025-06-22 12:07:14.349772 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-06-22 12:07:14.349787 | orchestrator | Sunday 22 June 2025 12:04:02 +0000 (0:00:00.047) 0:00:00.145 ***********
2025-06-22 12:07:14.349799 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-06-22 12:07:14.349825 | orchestrator | ...ignoring
2025-06-22 12:07:14.349836 | orchestrator |
2025-06-22 12:07:14.349847 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-06-22 12:07:14.349858 | orchestrator | Sunday 22 June 2025 12:04:05 +0000 (0:00:02.846) 0:00:02.991 ***********
2025-06-22 12:07:14.349869 | orchestrator | skipping: [localhost]
2025-06-22 12:07:14.349879 | orchestrator |
2025-06-22 12:07:14.349909 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-06-22 12:07:14.349920 | orchestrator | Sunday 22 June 2025 12:04:05 +0000 (0:00:00.049) 0:00:03.041 ***********
2025-06-22 12:07:14.349931 | orchestrator | ok: [localhost]
2025-06-22 12:07:14.349942 | orchestrator |
2025-06-22 12:07:14.349954 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 12:07:14.349964 | orchestrator |
2025-06-22 12:07:14.349975 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 12:07:14.349986 | orchestrator | Sunday 22 June 2025 12:04:05 +0000 (0:00:00.153) 0:00:03.195 ***********
2025-06-22 12:07:14.349997 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:07:14.350008 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:07:14.350087 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:07:14.350098 | orchestrator |
2025-06-22 12:07:14.350109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 12:07:14.350120 | orchestrator | Sunday 22 June 2025 12:04:06 +0000 (0:00:00.303) 0:00:03.499 ***********
2025-06-22 12:07:14.350131 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-06-22 12:07:14.350143 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-06-22 12:07:14.350154 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-06-22 12:07:14.350165 | orchestrator |
2025-06-22 12:07:14.350176 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-06-22 12:07:14.350187 | orchestrator |
2025-06-22 12:07:14.350198 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-06-22 12:07:14.350209 | orchestrator | Sunday 22 June 2025 12:04:06 +0000 (0:00:00.767) 0:00:04.266 ***********
2025-06-22 12:07:14.350220 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-22 12:07:14.350231 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-22 12:07:14.350242 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-22 12:07:14.350253 | orchestrator |
2025-06-22 12:07:14.350264 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-22 12:07:14.350274 | orchestrator | Sunday 22
June 2025 12:04:07 +0000 (0:00:00.406) 0:00:04.673 *********** 2025-06-22 12:07:14.350285 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:07:14.350297 | orchestrator | 2025-06-22 12:07:14.350308 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-22 12:07:14.350319 | orchestrator | Sunday 22 June 2025 12:04:07 +0000 (0:00:00.600) 0:00:05.274 *********** 2025-06-22 12:07:14.350355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.350380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.350399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.350422 | orchestrator | 2025-06-22 12:07:14.350443 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-22 12:07:14.350456 | orchestrator | Sunday 22 June 2025 12:04:10 +0000 (0:00:02.930) 0:00:08.204 *********** 2025-06-22 12:07:14.350469 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.350483 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.350496 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.350508 | orchestrator | 2025-06-22 12:07:14.350520 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-22 12:07:14.350533 | orchestrator | Sunday 22 June 2025 12:04:11 +0000 (0:00:00.618) 0:00:08.823 *********** 2025-06-22 12:07:14.350545 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.350557 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.350570 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.350582 | orchestrator | 2025-06-22 12:07:14.350595 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-22 12:07:14.350607 | orchestrator | Sunday 22 June 2025 12:04:12 +0000 (0:00:01.388) 0:00:10.211 *********** 2025-06-22 12:07:14.350621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.350648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.350677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.350690 | orchestrator | 2025-06-22 12:07:14.350703 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-22 12:07:14.350715 | orchestrator | Sunday 22 June 2025 12:04:16 +0000 (0:00:03.859) 0:00:14.071 *********** 2025-06-22 12:07:14.350728 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.350741 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.350755 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.350768 | orchestrator | 2025-06-22 12:07:14.350779 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-22 12:07:14.350790 | orchestrator | Sunday 22 June 2025 12:04:17 +0000 (0:00:01.145) 0:00:15.217 *********** 2025-06-22 12:07:14.350801 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.350812 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:07:14.350823 | orchestrator 
| changed: [testbed-node-2] 2025-06-22 12:07:14.350834 | orchestrator | 2025-06-22 12:07:14.350845 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 12:07:14.350862 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:04.601) 0:00:19.819 *********** 2025-06-22 12:07:14.350874 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:07:14.350885 | orchestrator | 2025-06-22 12:07:14.350896 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 12:07:14.350911 | orchestrator | Sunday 22 June 2025 12:04:22 +0000 (0:00:00.502) 0:00:20.321 *********** 2025-06-22 12:07:14.350931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.350944 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.350956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.350975 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.350999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351012 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.351023 | orchestrator | 2025-06-22 12:07:14.351033 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 12:07:14.351062 | orchestrator | Sunday 22 June 2025 12:04:26 +0000 (0:00:03.691) 0:00:24.013 *********** 2025-06-22 12:07:14.351074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351093 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.351116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351129 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.351140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351159 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.351170 | orchestrator | 2025-06-22 12:07:14.351181 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 12:07:14.351192 | orchestrator | Sunday 22 June 2025 12:04:29 +0000 (0:00:02.543) 0:00:26.556 *********** 2025-06-22 12:07:14.351214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351227 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.351239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351258 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.351274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 12:07:14.351286 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.351297 | orchestrator | 2025-06-22 12:07:14.351308 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-22 12:07:14.351320 | orchestrator | Sunday 22 June 2025 12:04:31 +0000 (0:00:02.668) 0:00:29.224 *********** 2025-06-22 12:07:14.351339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.351363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.351386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 12:07:14.351399 | orchestrator | 2025-06-22 12:07:14.351410 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-22 12:07:14.351421 | orchestrator | Sunday 22 June 2025 12:04:34 +0000 (0:00:03.132) 0:00:32.357 *********** 2025-06-22 12:07:14.351432 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.351449 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:07:14.351460 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:07:14.351471 | orchestrator | 2025-06-22 12:07:14.351482 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-22 12:07:14.351493 | orchestrator | Sunday 22 June 2025 12:04:36 +0000 (0:00:01.158) 0:00:33.515 *********** 2025-06-22 12:07:14.351504 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.351516 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.351526 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.351538 | orchestrator | 2025-06-22 12:07:14.351549 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-22 12:07:14.351560 | orchestrator | Sunday 22 June 2025 
12:04:36 +0000 (0:00:00.368) 0:00:33.884 *********** 2025-06-22 12:07:14.351571 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.351582 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.351592 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.351603 | orchestrator | 2025-06-22 12:07:14.351615 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-22 12:07:14.351626 | orchestrator | Sunday 22 June 2025 12:04:36 +0000 (0:00:00.389) 0:00:34.274 *********** 2025-06-22 12:07:14.351637 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-22 12:07:14.351648 | orchestrator | ...ignoring 2025-06-22 12:07:14.351659 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-22 12:07:14.351670 | orchestrator | ...ignoring 2025-06-22 12:07:14.351681 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-22 12:07:14.351693 | orchestrator | ...ignoring 2025-06-22 12:07:14.351703 | orchestrator | 2025-06-22 12:07:14.351715 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-22 12:07:14.351726 | orchestrator | Sunday 22 June 2025 12:04:47 +0000 (0:00:11.057) 0:00:45.331 *********** 2025-06-22 12:07:14.351737 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.351766 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.351777 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.351788 | orchestrator | 2025-06-22 12:07:14.351799 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-22 12:07:14.351810 | orchestrator | Sunday 22 June 2025 12:04:48 +0000 (0:00:00.775) 0:00:46.106 *********** 2025-06-22 12:07:14.351821 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.351832 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.351843 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.351854 | orchestrator | 2025-06-22 12:07:14.351865 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-22 12:07:14.351876 | orchestrator | Sunday 22 June 2025 12:04:49 +0000 (0:00:00.461) 0:00:46.568 *********** 2025-06-22 12:07:14.351887 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.351949 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.351963 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.351974 | orchestrator | 2025-06-22 12:07:14.351985 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-22 12:07:14.351996 | orchestrator | Sunday 22 June 2025 12:04:49 +0000 (0:00:00.433) 0:00:47.002 *********** 2025-06-22 12:07:14.352006 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 12:07:14.352017 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.352028 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.352093 | orchestrator | 2025-06-22 12:07:14.352105 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-22 12:07:14.352123 | orchestrator | Sunday 22 June 2025 12:04:50 +0000 (0:00:00.462) 0:00:47.464 *********** 2025-06-22 12:07:14.352135 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.352157 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.352168 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.352179 | orchestrator | 2025-06-22 12:07:14.352190 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-22 12:07:14.352201 | orchestrator | Sunday 22 June 2025 12:04:50 +0000 (0:00:00.762) 0:00:48.227 *********** 2025-06-22 12:07:14.352212 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.352223 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.352234 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.352244 | orchestrator | 2025-06-22 12:07:14.352255 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 12:07:14.352266 | orchestrator | Sunday 22 June 2025 12:04:51 +0000 (0:00:00.421) 0:00:48.648 *********** 2025-06-22 12:07:14.352277 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.352287 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.352298 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-22 12:07:14.352309 | orchestrator | 2025-06-22 12:07:14.352320 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-22 12:07:14.352331 | orchestrator | Sunday 22 June 2025 12:04:51 +0000 (0:00:00.391) 0:00:49.040 *********** 2025-06-22 
12:07:14.352342 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.352352 | orchestrator | 2025-06-22 12:07:14.352363 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-22 12:07:14.352374 | orchestrator | Sunday 22 June 2025 12:05:02 +0000 (0:00:10.595) 0:00:59.635 *********** 2025-06-22 12:07:14.352385 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.352396 | orchestrator | 2025-06-22 12:07:14.352407 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 12:07:14.352417 | orchestrator | Sunday 22 June 2025 12:05:02 +0000 (0:00:00.136) 0:00:59.771 *********** 2025-06-22 12:07:14.352428 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.352439 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.352450 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.352461 | orchestrator | 2025-06-22 12:07:14.352471 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-22 12:07:14.352482 | orchestrator | Sunday 22 June 2025 12:05:03 +0000 (0:00:01.125) 0:01:00.897 *********** 2025-06-22 12:07:14.352493 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.352504 | orchestrator | 2025-06-22 12:07:14.352514 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-22 12:07:14.352525 | orchestrator | Sunday 22 June 2025 12:05:11 +0000 (0:00:08.197) 0:01:09.095 *********** 2025-06-22 12:07:14.352536 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.352547 | orchestrator | 2025-06-22 12:07:14.352558 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-22 12:07:14.352569 | orchestrator | Sunday 22 June 2025 12:05:13 +0000 (0:00:01.589) 0:01:10.684 *********** 2025-06-22 12:07:14.352580 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.352590 | 
orchestrator | 2025-06-22 12:07:14.352601 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-22 12:07:14.352612 | orchestrator | Sunday 22 June 2025 12:05:15 +0000 (0:00:02.563) 0:01:13.248 *********** 2025-06-22 12:07:14.352623 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.352633 | orchestrator | 2025-06-22 12:07:14.352644 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-22 12:07:14.352655 | orchestrator | Sunday 22 June 2025 12:05:15 +0000 (0:00:00.132) 0:01:13.380 *********** 2025-06-22 12:07:14.352666 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.352677 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.352687 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.352698 | orchestrator | 2025-06-22 12:07:14.352709 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-22 12:07:14.352720 | orchestrator | Sunday 22 June 2025 12:05:16 +0000 (0:00:00.506) 0:01:13.887 *********** 2025-06-22 12:07:14.352737 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.352748 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 12:07:14.352758 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:07:14.352769 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:07:14.352780 | orchestrator | 2025-06-22 12:07:14.352791 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 12:07:14.352802 | orchestrator | skipping: no hosts matched 2025-06-22 12:07:14.352812 | orchestrator | 2025-06-22 12:07:14.352823 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 12:07:14.352834 | orchestrator | 2025-06-22 12:07:14.352851 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-06-22 12:07:14.352862 | orchestrator | Sunday 22 June 2025 12:05:16 +0000 (0:00:00.329) 0:01:14.217 *********** 2025-06-22 12:07:14.352872 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:07:14.352883 | orchestrator | 2025-06-22 12:07:14.352894 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 12:07:14.352905 | orchestrator | Sunday 22 June 2025 12:05:42 +0000 (0:00:25.671) 0:01:39.888 *********** 2025-06-22 12:07:14.352916 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.352927 | orchestrator | 2025-06-22 12:07:14.352937 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 12:07:14.352948 | orchestrator | Sunday 22 June 2025 12:05:58 +0000 (0:00:15.586) 0:01:55.474 *********** 2025-06-22 12:07:14.352959 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.352970 | orchestrator | 2025-06-22 12:07:14.352980 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 12:07:14.352991 | orchestrator | 2025-06-22 12:07:14.353002 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 12:07:14.353013 | orchestrator | Sunday 22 June 2025 12:06:00 +0000 (0:00:02.808) 0:01:58.283 *********** 2025-06-22 12:07:14.353024 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:07:14.353034 | orchestrator | 2025-06-22 12:07:14.353093 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 12:07:14.353111 | orchestrator | Sunday 22 June 2025 12:06:18 +0000 (0:00:17.497) 0:02:15.780 *********** 2025-06-22 12:07:14.353122 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.353133 | orchestrator | 2025-06-22 12:07:14.353144 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 12:07:14.353155 
| orchestrator | Sunday 22 June 2025 12:06:39 +0000 (0:00:20.666) 0:02:36.446 *********** 2025-06-22 12:07:14.353166 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.353176 | orchestrator | 2025-06-22 12:07:14.353187 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 12:07:14.353198 | orchestrator | 2025-06-22 12:07:14.353209 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 12:07:14.353220 | orchestrator | Sunday 22 June 2025 12:06:41 +0000 (0:00:02.892) 0:02:39.339 *********** 2025-06-22 12:07:14.353231 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.353241 | orchestrator | 2025-06-22 12:07:14.353252 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 12:07:14.353263 | orchestrator | Sunday 22 June 2025 12:06:53 +0000 (0:00:11.515) 0:02:50.855 *********** 2025-06-22 12:07:14.353274 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.353285 | orchestrator | 2025-06-22 12:07:14.353296 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 12:07:14.353307 | orchestrator | Sunday 22 June 2025 12:06:58 +0000 (0:00:04.641) 0:02:55.497 *********** 2025-06-22 12:07:14.353318 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.353328 | orchestrator | 2025-06-22 12:07:14.353339 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 12:07:14.353350 | orchestrator | 2025-06-22 12:07:14.353361 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 12:07:14.353372 | orchestrator | Sunday 22 June 2025 12:07:00 +0000 (0:00:02.330) 0:02:57.827 *********** 2025-06-22 12:07:14.353391 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:07:14.353402 | orchestrator | 
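The "Check MariaDB service port liveness" failures earlier in this play come from an Ansible `wait_for`-style probe: connect to port 3306 and look for the string "MariaDB" in the server's initial handshake greeting, timing out after 10 seconds. A minimal sketch of that check (the helper name and defaults are illustrative assumptions, not taken from the kolla-ansible role):

```python
import socket

def mariadb_port_alive(host: str, port: int = 3306,
                       search: bytes = b"MariaDB",
                       timeout: float = 10.0) -> bool:
    """Connect to host:port and report whether the initial server
    greeting contains `search` (mirrors wait_for with search_regex).
    NOTE: a sketch of the check's semantics, not the actual module code."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # MySQL/MariaDB servers send a handshake packet first,
            # which embeds the version string (e.g. "...-MariaDB-log").
            banner = sock.recv(1024)
            return search in banner
    except OSError:
        return False
```

In this run the probe fails on all three nodes simply because the containers have not been started yet, which is why each failure is followed by "...ignoring" and the play proceeds to bootstrap the cluster.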
2025-06-22 12:07:14.353413 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-22 12:07:14.353424 | orchestrator | Sunday 22 June 2025 12:07:00 +0000 (0:00:00.516) 0:02:58.344 *********** 2025-06-22 12:07:14.353435 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.353446 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.353457 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.353467 | orchestrator | 2025-06-22 12:07:14.353478 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-22 12:07:14.353489 | orchestrator | Sunday 22 June 2025 12:07:03 +0000 (0:00:02.523) 0:03:00.867 *********** 2025-06-22 12:07:14.353500 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.353511 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.353521 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.353532 | orchestrator | 2025-06-22 12:07:14.353543 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-22 12:07:14.353553 | orchestrator | Sunday 22 June 2025 12:07:05 +0000 (0:00:02.279) 0:03:03.146 *********** 2025-06-22 12:07:14.353563 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.353573 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.353582 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.353592 | orchestrator | 2025-06-22 12:07:14.353602 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-22 12:07:14.353611 | orchestrator | Sunday 22 June 2025 12:07:07 +0000 (0:00:02.176) 0:03:05.323 *********** 2025-06-22 12:07:14.353621 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.353631 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.353640 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:07:14.353650 | orchestrator | 
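The "Wait for MariaDB service to sync WSREP" handlers above and the "Task … is in state STARTED / Wait 1 second(s) until the next check" loop further down follow the same pattern: poll a status probe until it reports the desired value or a deadline passes. A generic sketch of that loop (function and parameter names are illustrative, not from OSISM or kolla-ansible):

```python
import time

def wait_for_state(probe, wanted: str, interval: float = 1.0,
                   timeout: float = 300.0) -> bool:
    """Call probe() every `interval` seconds until it returns `wanted`
    or `timeout` seconds elapse; True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe() == wanted:
            return True
        # "Wait 1 second(s) until the next check"
        time.sleep(interval)
    return False
```

For the WSREP case the probe would query `SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'` on the node and `wanted` would be "Synced"; for the OSISM task watcher it is the Celery-style task state, with "SUCCESS" as the target.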
2025-06-22 12:07:14.353660 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-22 12:07:14.353669 | orchestrator | Sunday 22 June 2025 12:07:10 +0000 (0:00:02.285) 0:03:07.608 *********** 2025-06-22 12:07:14.353679 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:07:14.353689 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:07:14.353698 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:07:14.353708 | orchestrator | 2025-06-22 12:07:14.353718 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 12:07:14.353727 | orchestrator | Sunday 22 June 2025 12:07:13 +0000 (0:00:03.011) 0:03:10.619 *********** 2025-06-22 12:07:14.353737 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:07:14.353747 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:07:14.353756 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:07:14.353766 | orchestrator | 2025-06-22 12:07:14.353775 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:07:14.353790 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 12:07:14.353800 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-22 12:07:14.353811 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 12:07:14.353820 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 12:07:14.353830 | orchestrator | 2025-06-22 12:07:14.353840 | orchestrator | 2025-06-22 12:07:14.353850 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:07:14.353860 | orchestrator | Sunday 22 June 2025 12:07:13 +0000 (0:00:00.237) 0:03:10.857 *********** 2025-06-22 12:07:14.353869 | 
orchestrator | =============================================================================== 2025-06-22 12:07:14.353885 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.17s 2025-06-22 12:07:14.353895 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.25s 2025-06-22 12:07:14.353909 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.52s 2025-06-22 12:07:14.353919 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.06s 2025-06-22 12:07:14.353929 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.60s 2025-06-22 12:07:14.353939 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.20s 2025-06-22 12:07:14.353948 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.70s 2025-06-22 12:07:14.353958 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.64s 2025-06-22 12:07:14.353968 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.60s 2025-06-22 12:07:14.353977 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.86s 2025-06-22 12:07:14.353987 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.69s 2025-06-22 12:07:14.353996 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.13s 2025-06-22 12:07:14.354006 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.01s 2025-06-22 12:07:14.354062 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.93s 2025-06-22 12:07:14.354073 | orchestrator | Check MariaDB service --------------------------------------------------- 2.85s 2025-06-22 12:07:14.354083 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.67s 2025-06-22 12:07:14.354093 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.56s 2025-06-22 12:07:14.354102 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.54s 2025-06-22 12:07:14.354112 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.52s 2025-06-22 12:07:14.354122 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.33s 2025-06-22 12:07:14.354131 | orchestrator | 2025-06-22 12:07:14 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:07:14.354141 | orchestrator | 2025-06-22 12:07:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:07:17.410946 | orchestrator | 2025-06-22 12:07:17 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED 2025-06-22 12:07:17.412497 | orchestrator | 2025-06-22 12:07:17 | INFO  | Task 3756e2df-3e13-4719-98e8-ec80558df39c is in state STARTED 2025-06-22 12:07:17.414922 | orchestrator | 2025-06-22 12:07:17 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:07:17.415014 | orchestrator | 2025-06-22 12:07:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:07:20.459596 | orchestrator | 2025-06-22 12:07:20 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED 2025-06-22 12:07:20.464031 | orchestrator | 2025-06-22 12:07:20 | INFO  | Task 3756e2df-3e13-4719-98e8-ec80558df39c is in state STARTED 2025-06-22 12:07:20.464723 | orchestrator | 2025-06-22 12:07:20 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:07:20.467329 | orchestrator | 2025-06-22 12:07:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:07:23.510756 | orchestrator | 2025-06-22 12:07:23 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state 
STARTED 2025-06-22 12:08:21.484149 | orchestrator | 2025-06-22 12:08:21 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED 2025-06-22 12:08:21.485159 | orchestrator | 2025-06-22 12:08:21 | INFO  | Task
3756e2df-3e13-4719-98e8-ec80558df39c is in state STARTED 2025-06-22 12:08:21.486550 | orchestrator | 2025-06-22 12:08:21 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state STARTED 2025-06-22 12:08:21.486640 | orchestrator | 2025-06-22 12:08:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:08:24.538444 | orchestrator | 2025-06-22 12:08:24 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED 2025-06-22 12:08:24.542964 | orchestrator | 2025-06-22 12:08:24 | INFO  | Task 744f22dd-a594-4bb8-8db2-028635ed03f0 is in state STARTED 2025-06-22 12:08:24.543967 | orchestrator | 2025-06-22 12:08:24 | INFO  | Task 3756e2df-3e13-4719-98e8-ec80558df39c is in state STARTED 2025-06-22 12:08:24.545852 | orchestrator | 2025-06-22 12:08:24 | INFO  | Task 058f01a4-c45b-41f6-bab7-96bbecc378f7 is in state SUCCESS 2025-06-22 12:08:24.547886 | orchestrator | 2025-06-22 12:08:24.547917 | orchestrator | 2025-06-22 12:08:24.547928 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-22 12:08:24.547939 | orchestrator | 2025-06-22 12:08:24.547950 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 12:08:24.547960 | orchestrator | Sunday 22 June 2025 12:06:14 +0000 (0:00:00.430) 0:00:00.430 *********** 2025-06-22 12:08:24.547970 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:08:24.547982 | orchestrator | 2025-06-22 12:08:24.547992 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 12:08:24.548038 | orchestrator | Sunday 22 June 2025 12:06:15 +0000 (0:00:00.458) 0:00:00.888 *********** 2025-06-22 12:08:24.548049 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.548108 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.548118 | orchestrator | ok: [testbed-node-3] 2025-06-22 
12:08:24.548128 | orchestrator | 2025-06-22 12:08:24.548137 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 12:08:24.548254 | orchestrator | Sunday 22 June 2025 12:06:16 +0000 (0:00:00.697) 0:00:01.586 *********** 2025-06-22 12:08:24.548269 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.548279 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.548289 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.548298 | orchestrator | 2025-06-22 12:08:24.548308 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 12:08:24.548318 | orchestrator | Sunday 22 June 2025 12:06:16 +0000 (0:00:00.299) 0:00:01.886 *********** 2025-06-22 12:08:24.548327 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.548337 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.548346 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.548356 | orchestrator | 2025-06-22 12:08:24.548365 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 12:08:24.548375 | orchestrator | Sunday 22 June 2025 12:06:17 +0000 (0:00:00.725) 0:00:02.611 *********** 2025-06-22 12:08:24.548385 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.548394 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.548404 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.548413 | orchestrator | 2025-06-22 12:08:24.548423 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 12:08:24.548432 | orchestrator | Sunday 22 June 2025 12:06:17 +0000 (0:00:00.273) 0:00:02.884 *********** 2025-06-22 12:08:24.548442 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.548451 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.548461 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.548470 | orchestrator | 2025-06-22 12:08:24.548480 | orchestrator 
| TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 12:08:24.548489 | orchestrator | Sunday 22 June 2025 12:06:17 +0000 (0:00:00.250) 0:00:03.135 *********** 2025-06-22 12:08:24.548499 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.548508 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.548517 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.548527 | orchestrator | 2025-06-22 12:08:24.548537 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 12:08:24.548546 | orchestrator | Sunday 22 June 2025 12:06:17 +0000 (0:00:00.265) 0:00:03.400 *********** 2025-06-22 12:08:24.548556 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.548939 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.548956 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.548966 | orchestrator | 2025-06-22 12:08:24.548977 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 12:08:24.548986 | orchestrator | Sunday 22 June 2025 12:06:18 +0000 (0:00:00.414) 0:00:03.815 *********** 2025-06-22 12:08:24.548996 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.549005 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.549015 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.549025 | orchestrator | 2025-06-22 12:08:24.549034 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 12:08:24.549044 | orchestrator | Sunday 22 June 2025 12:06:18 +0000 (0:00:00.244) 0:00:04.060 *********** 2025-06-22 12:08:24.549106 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:08:24.549117 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:08:24.549127 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:08:24.549136 | orchestrator | 2025-06-22 12:08:24.549146 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 12:08:24.549168 | orchestrator | Sunday 22 June 2025 12:06:19 +0000 (0:00:00.593) 0:00:04.653 *********** 2025-06-22 12:08:24.549178 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.549188 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.549197 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.549206 | orchestrator | 2025-06-22 12:08:24.549216 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 12:08:24.549225 | orchestrator | Sunday 22 June 2025 12:06:19 +0000 (0:00:00.403) 0:00:05.057 *********** 2025-06-22 12:08:24.549235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:08:24.549244 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:08:24.549254 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:08:24.549264 | orchestrator | 2025-06-22 12:08:24.549273 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 12:08:24.549283 | orchestrator | Sunday 22 June 2025 12:06:21 +0000 (0:00:02.096) 0:00:07.153 *********** 2025-06-22 12:08:24.549292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 12:08:24.549303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 12:08:24.549312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 12:08:24.549322 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.549331 | orchestrator | 2025-06-22 12:08:24.549341 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2025-06-22 12:08:24.549361 | orchestrator | Sunday 22 June 2025 12:06:22 +0000 (0:00:00.416) 0:00:07.570 *********** 2025-06-22 12:08:24.549373 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.549670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.549690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.549700 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.549710 | orchestrator | 2025-06-22 12:08:24.549719 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 12:08:24.549729 | orchestrator | Sunday 22 June 2025 12:06:22 +0000 (0:00:00.784) 0:00:08.354 *********** 2025-06-22 12:08:24.549798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.549815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.549825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.549864 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.549875 | orchestrator | 2025-06-22 12:08:24.549884 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 12:08:24.549894 | orchestrator | Sunday 22 June 2025 12:06:23 +0000 (0:00:00.157) 0:00:08.512 *********** 2025-06-22 12:08:24.549906 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bc12daefa5cc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 12:06:20.275312', 'end': '2025-06-22 12:06:20.313340', 'delta': '0:00:00.038028', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc12daefa5cc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-22 12:08:24.549925 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '025e7b2839dd', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 12:06:21.015096', 'end': '2025-06-22 12:06:21.049514', 'delta': '0:00:00.034418', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['025e7b2839dd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-22 12:08:24.549988 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c9385fc5c0a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 12:06:21.535196', 'end': '2025-06-22 12:06:21.571448', 'delta': '0:00:00.036252', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9385fc5c0a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-22 12:08:24.550002 | orchestrator | 2025-06-22 12:08:24.550012 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 12:08:24.550104 | orchestrator | Sunday 22 June 2025 12:06:23 +0000 (0:00:00.365) 0:00:08.877 *********** 2025-06-22 12:08:24.550115 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.550124 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.550134 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.550144 | orchestrator | 2025-06-22 12:08:24.550154 | orchestrator 
| TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 12:08:24.550163 | orchestrator | Sunday 22 June 2025 12:06:23 +0000 (0:00:00.438) 0:00:09.316 *********** 2025-06-22 12:08:24.550172 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-22 12:08:24.550182 | orchestrator | 2025-06-22 12:08:24.550191 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 12:08:24.550201 | orchestrator | Sunday 22 June 2025 12:06:25 +0000 (0:00:01.690) 0:00:11.006 *********** 2025-06-22 12:08:24.550210 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550220 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550229 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550249 | orchestrator | 2025-06-22 12:08:24.550259 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 12:08:24.550269 | orchestrator | Sunday 22 June 2025 12:06:25 +0000 (0:00:00.285) 0:00:11.291 *********** 2025-06-22 12:08:24.550279 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550291 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550301 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550312 | orchestrator | 2025-06-22 12:08:24.550323 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 12:08:24.550334 | orchestrator | Sunday 22 June 2025 12:06:26 +0000 (0:00:00.391) 0:00:11.682 *********** 2025-06-22 12:08:24.550345 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550356 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550367 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550378 | orchestrator | 2025-06-22 12:08:24.550389 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 12:08:24.550400 | 
orchestrator | Sunday 22 June 2025 12:06:26 +0000 (0:00:00.476) 0:00:12.159 *********** 2025-06-22 12:08:24.550411 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.550422 | orchestrator | 2025-06-22 12:08:24.550433 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 12:08:24.550444 | orchestrator | Sunday 22 June 2025 12:06:26 +0000 (0:00:00.133) 0:00:12.292 *********** 2025-06-22 12:08:24.550455 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550466 | orchestrator | 2025-06-22 12:08:24.550477 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 12:08:24.550489 | orchestrator | Sunday 22 June 2025 12:06:27 +0000 (0:00:00.210) 0:00:12.503 *********** 2025-06-22 12:08:24.550499 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550511 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550521 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550532 | orchestrator | 2025-06-22 12:08:24.550544 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 12:08:24.550555 | orchestrator | Sunday 22 June 2025 12:06:27 +0000 (0:00:00.288) 0:00:12.792 *********** 2025-06-22 12:08:24.550567 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550578 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550589 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550599 | orchestrator | 2025-06-22 12:08:24.550610 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 12:08:24.550622 | orchestrator | Sunday 22 June 2025 12:06:27 +0000 (0:00:00.324) 0:00:13.116 *********** 2025-06-22 12:08:24.550633 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550654 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 12:08:24.550664 | orchestrator | 2025-06-22 12:08:24.550673 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 12:08:24.550688 | orchestrator | Sunday 22 June 2025 12:06:28 +0000 (0:00:00.540) 0:00:13.656 *********** 2025-06-22 12:08:24.550698 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550708 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550717 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550726 | orchestrator | 2025-06-22 12:08:24.550736 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 12:08:24.550745 | orchestrator | Sunday 22 June 2025 12:06:28 +0000 (0:00:00.324) 0:00:13.981 *********** 2025-06-22 12:08:24.550755 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550764 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550773 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550783 | orchestrator | 2025-06-22 12:08:24.550792 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-22 12:08:24.550802 | orchestrator | Sunday 22 June 2025 12:06:28 +0000 (0:00:00.332) 0:00:14.314 *********** 2025-06-22 12:08:24.550811 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550836 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550846 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.550856 | orchestrator | 2025-06-22 12:08:24.550865 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 12:08:24.550905 | orchestrator | Sunday 22 June 2025 12:06:29 +0000 (0:00:00.330) 0:00:14.644 *********** 2025-06-22 12:08:24.550916 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.550926 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.550936 | orchestrator | skipping: 
[testbed-node-5]
2025-06-22 12:08:24.550945 | orchestrator |
2025-06-22 12:08:24.550955 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-22 12:08:24.550964 | orchestrator | Sunday 22 June 2025 12:06:29 +0000 (0:00:00.560) 0:00:15.205 ***********
2025-06-22 12:08:24.550975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f', 'dm-uuid-LVM-5z2kMErXdzqhz6sGodEbou1xMVtAcvKqvPv92Sa4BaDuu3K61FJbBQLqXSUrKRT2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.550986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6', 'dm-uuid-LVM-DmuDx4q0eg9c7S39c7306HiSMFddoeKvpLAa0XFHzC1czDgajcKZPlc2LLeS5Lax'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.550997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-22 12:08:24.551158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GkFboE-SZ5j-PxRK-4llI-c7Kk-Tsfz-iL7GcI', 'scsi-0QEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2', 'scsi-SQEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-22 12:08:24.551200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D4mnQ5-hyef-SylW-6zGM-naDV-MTqz-6xfjrr', 'scsi-0QEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357', 'scsi-SQEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-22 12:08:24.551212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b', 'scsi-SQEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-22 12:08:24.551224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4', 'dm-uuid-LVM-x1R5ovTZjx0BSQusAolddecCSdxeaymHnPe0JqCYsauL0BU6MCAzn1rRsQCc2u3m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-22 12:08:24.551245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc', 'dm-uuid-LVM-ag0D9SVtA7CjPrJ09lGiURgPqhP0rrh81ZbsSXtAxPezQngHKhcacKIqTafegz5S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-22 12:08:24.551364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard':
'0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XIFEhX-1znK-U8Q8-sqVA-7VrO-xW09-7AWieS', 'scsi-0QEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273', 'scsi-SQEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wzx2v6-UW7k-0R9Q-SHgb-JNbF-gFii-bsHINm', 'scsi-0QEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125', 'scsi-SQEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551426 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.551436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf', 'scsi-SQEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551464 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.551478 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1', 'dm-uuid-LVM-acpfe85L5vZuA4u1jglxT9JbzXosiaumIcUM3C65UsG6SE4zjN6U4I4NdDNSd1lJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330', 'dm-uuid-LVM-XLo2EjF3JS9KQI13FFIVCJ739Xx6PhXo2Ft1rKtLd9VZcEz84kQEE9xSFtmu9pZd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-06-22 12:08:24.551575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 12:08:24.551608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551620 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fOb4E1-Vi2m-r6V2-FMAw-MsLk-xAmo-CWZQUj', 'scsi-0QEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f', 'scsi-SQEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7WG3V5-C9T6-AL8i-e53y-TJpC-FvQt-dmIRlR', 'scsi-0QEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e', 'scsi-SQEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52', 'scsi-SQEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 12:08:24.551679 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.551689 | orchestrator | 2025-06-22 12:08:24.551699 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 12:08:24.551709 | orchestrator | Sunday 22 June 2025 12:06:30 +0000 (0:00:00.616) 0:00:15.822 *********** 2025-06-22 12:08:24.551720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f', 'dm-uuid-LVM-5z2kMErXdzqhz6sGodEbou1xMVtAcvKqvPv92Sa4BaDuu3K61FJbBQLqXSUrKRT2'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6', 'dm-uuid-LVM-DmuDx4q0eg9c7S39c7306HiSMFddoeKvpLAa0XFHzC1czDgajcKZPlc2LLeS5Lax'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ea9cfe9-e584-4538-969c-cb61cccf4b41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4', 'dm-uuid-LVM-x1R5ovTZjx0BSQusAolddecCSdxeaymHnPe0JqCYsauL0BU6MCAzn1rRsQCc2u3m'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f-osd--block--6ffadd37--6b10--5a4f--8f0b--2da52ae5008f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GkFboE-SZ5j-PxRK-4llI-c7Kk-Tsfz-iL7GcI', 'scsi-0QEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2', 'scsi-SQEMU_QEMU_HARDDISK_4b47f8cd-db2a-4bea-898d-3d48c49a84c2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc', 'dm-uuid-LVM-ag0D9SVtA7CjPrJ09lGiURgPqhP0rrh81ZbsSXtAxPezQngHKhcacKIqTafegz5S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0b51a6ec--8722--57c7--ad6b--56758d62ede6-osd--block--0b51a6ec--8722--57c7--ad6b--56758d62ede6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D4mnQ5-hyef-SylW-6zGM-naDV-MTqz-6xfjrr', 'scsi-0QEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357', 'scsi-SQEMU_QEMU_HARDDISK_7610229b-d7bf-450f-9964-1d42e936a357'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b', 'scsi-SQEMU_QEMU_HARDDISK_c288123e-75d1-4d08-8561-55f7fbbd7c1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551967 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.551992 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552018 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.552028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552038 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552114 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16', 'scsi-SQEMU_QEMU_HARDDISK_939cb3b2-f470-4f15-9cd5-5f32e96d8a48-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d90edff2--979c--5e5e--98e2--f02394d35fb4-osd--block--d90edff2--979c--5e5e--98e2--f02394d35fb4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XIFEhX-1znK-U8Q8-sqVA-7VrO-xW09-7AWieS', 'scsi-0QEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273', 'scsi-SQEMU_QEMU_HARDDISK_95ca9be4-ae4c-4603-a11a-c98b5f55b273'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9de1692c--afc0--5cdb--8a59--e564d6a096fc-osd--block--9de1692c--afc0--5cdb--8a59--e564d6a096fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wzx2v6-UW7k-0R9Q-SHgb-JNbF-gFii-bsHINm', 'scsi-0QEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125', 'scsi-SQEMU_QEMU_HARDDISK_899f0377-b87c-421a-9d44-3bd393f5c125'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf', 'scsi-SQEMU_QEMU_HARDDISK_060f7999-6812-4095-99a7-aa228581a5cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552178 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1', 'dm-uuid-LVM-acpfe85L5vZuA4u1jglxT9JbzXosiaumIcUM3C65UsG6SE4zjN6U4I4NdDNSd1lJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-22 12:08:24 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:08:24.552202 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552219 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.552229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330', 'dm-uuid-LVM-XLo2EjF3JS9KQI13FFIVCJ739Xx6PhXo2Ft1rKtLd9VZcEz84kQEE9xSFtmu9pZd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552249 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552264 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-22 12:08:24.552279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552317 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552327 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552352 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16', 'scsi-SQEMU_QEMU_HARDDISK_78f6eb13-c64f-4a4d-8d42-a1e1157c4033-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-22 12:08:24.552370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8a4028de--648e--5a19--94a5--5dc0f00dede1-osd--block--8a4028de--648e--5a19--94a5--5dc0f00dede1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fOb4E1-Vi2m-r6V2-FMAw-MsLk-xAmo-CWZQUj', 'scsi-0QEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f', 'scsi-SQEMU_QEMU_HARDDISK_0234f42c-6d02-44b8-b796-e801f7c6659f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1d622d46--9f3b--5fb0--a039--cce126484330-osd--block--1d622d46--9f3b--5fb0--a039--cce126484330'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7WG3V5-C9T6-AL8i-e53y-TJpC-FvQt-dmIRlR', 'scsi-0QEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e', 'scsi-SQEMU_QEMU_HARDDISK_a273c01c-52c4-42f8-a181-d91a87ff3a5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52', 'scsi-SQEMU_QEMU_HARDDISK_a129606c-fab1-48ed-9350-9d2eafddbd52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-11-16-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 12:08:24.552422 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.552432 | orchestrator | 2025-06-22 12:08:24.552442 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 12:08:24.552452 | orchestrator | Sunday 22 June 2025 12:06:30 +0000 (0:00:00.628) 0:00:16.450 *********** 2025-06-22 12:08:24.552462 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.552478 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.552487 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.552497 | orchestrator | 2025-06-22 12:08:24.552507 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 12:08:24.552517 | orchestrator | Sunday 22 June 2025 12:06:31 +0000 (0:00:00.658) 0:00:17.109 *********** 2025-06-22 12:08:24.552526 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.552536 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.552545 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.552555 | orchestrator | 2025-06-22 12:08:24.552565 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 12:08:24.552575 | orchestrator | Sunday 22 June 2025 12:06:32 +0000 (0:00:00.490) 0:00:17.599 *********** 2025-06-22 12:08:24.552584 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.552594 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.552603 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.552613 | orchestrator | 2025-06-22 12:08:24.552623 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 12:08:24.552632 | orchestrator | Sunday 22 June 2025 12:06:32 +0000 (0:00:00.628) 0:00:18.228 
*********** 2025-06-22 12:08:24.552642 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.552652 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.552661 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.552671 | orchestrator | 2025-06-22 12:08:24.552681 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 12:08:24.552690 | orchestrator | Sunday 22 June 2025 12:06:33 +0000 (0:00:00.300) 0:00:18.529 *********** 2025-06-22 12:08:24.552700 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.552709 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.552719 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.552729 | orchestrator | 2025-06-22 12:08:24.552738 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 12:08:24.552747 | orchestrator | Sunday 22 June 2025 12:06:33 +0000 (0:00:00.419) 0:00:18.948 *********** 2025-06-22 12:08:24.552757 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.552767 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.552776 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.552786 | orchestrator | 2025-06-22 12:08:24.552795 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 12:08:24.552805 | orchestrator | Sunday 22 June 2025 12:06:34 +0000 (0:00:00.510) 0:00:19.459 *********** 2025-06-22 12:08:24.552815 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 12:08:24.552825 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 12:08:24.552834 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 12:08:24.552844 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 12:08:24.552853 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 12:08:24.552863 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 12:08:24.552872 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 12:08:24.552882 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 12:08:24.552891 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 12:08:24.552901 | orchestrator | 2025-06-22 12:08:24.552911 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 12:08:24.552920 | orchestrator | Sunday 22 June 2025 12:06:34 +0000 (0:00:00.827) 0:00:20.287 *********** 2025-06-22 12:08:24.552929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 12:08:24.552939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 12:08:24.552949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 12:08:24.552958 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.552968 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 12:08:24.552984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 12:08:24.552998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 12:08:24.553008 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.553018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 12:08:24.553027 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 12:08:24.553037 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 12:08:24.553046 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.553072 | orchestrator | 2025-06-22 12:08:24.553082 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 12:08:24.553092 | orchestrator | Sunday 22 June 2025 12:06:35 +0000 (0:00:00.351) 0:00:20.639 *********** 2025-06-22 
12:08:24.553102 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:08:24.553112 | orchestrator | 2025-06-22 12:08:24.553122 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 12:08:24.553133 | orchestrator | Sunday 22 June 2025 12:06:35 +0000 (0:00:00.702) 0:00:21.341 *********** 2025-06-22 12:08:24.553147 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553157 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.553167 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.553177 | orchestrator | 2025-06-22 12:08:24.553186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 12:08:24.553196 | orchestrator | Sunday 22 June 2025 12:06:36 +0000 (0:00:00.325) 0:00:21.667 *********** 2025-06-22 12:08:24.553206 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553216 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.553225 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.553235 | orchestrator | 2025-06-22 12:08:24.553245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 12:08:24.553255 | orchestrator | Sunday 22 June 2025 12:06:36 +0000 (0:00:00.303) 0:00:21.971 *********** 2025-06-22 12:08:24.553264 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.553284 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:08:24.553294 | orchestrator | 2025-06-22 12:08:24.553304 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 12:08:24.553313 | orchestrator | Sunday 22 June 2025 12:06:36 +0000 (0:00:00.330) 0:00:22.302 *********** 2025-06-22 
12:08:24.553323 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.553333 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.553342 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.553352 | orchestrator | 2025-06-22 12:08:24.553362 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 12:08:24.553372 | orchestrator | Sunday 22 June 2025 12:06:37 +0000 (0:00:00.584) 0:00:22.886 *********** 2025-06-22 12:08:24.553381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:08:24.553391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:08:24.553400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:08:24.553410 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553420 | orchestrator | 2025-06-22 12:08:24.553430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 12:08:24.553440 | orchestrator | Sunday 22 June 2025 12:06:37 +0000 (0:00:00.382) 0:00:23.269 *********** 2025-06-22 12:08:24.553449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:08:24.553459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:08:24.553468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:08:24.553478 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553488 | orchestrator | 2025-06-22 12:08:24.553497 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 12:08:24.553513 | orchestrator | Sunday 22 June 2025 12:06:38 +0000 (0:00:00.354) 0:00:23.623 *********** 2025-06-22 12:08:24.553523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 12:08:24.553533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 12:08:24.553543 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 12:08:24.553552 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553562 | orchestrator | 2025-06-22 12:08:24.553572 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 12:08:24.553581 | orchestrator | Sunday 22 June 2025 12:06:38 +0000 (0:00:00.340) 0:00:23.964 *********** 2025-06-22 12:08:24.553591 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:08:24.553601 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:08:24.553611 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:08:24.553620 | orchestrator | 2025-06-22 12:08:24.553630 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 12:08:24.553640 | orchestrator | Sunday 22 June 2025 12:06:38 +0000 (0:00:00.317) 0:00:24.282 *********** 2025-06-22 12:08:24.553650 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 12:08:24.553659 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 12:08:24.553669 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 12:08:24.553679 | orchestrator | 2025-06-22 12:08:24.553689 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 12:08:24.553698 | orchestrator | Sunday 22 June 2025 12:06:39 +0000 (0:00:00.500) 0:00:24.782 *********** 2025-06-22 12:08:24.553708 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:08:24.553718 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:08:24.553727 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:08:24.553737 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 12:08:24.553747 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-06-22 12:08:24.553761 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 12:08:24.553771 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 12:08:24.553781 | orchestrator | 2025-06-22 12:08:24.553790 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 12:08:24.553800 | orchestrator | Sunday 22 June 2025 12:06:40 +0000 (0:00:00.969) 0:00:25.751 *********** 2025-06-22 12:08:24.553810 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 12:08:24.553820 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 12:08:24.553829 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 12:08:24.553839 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 12:08:24.553849 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 12:08:24.553859 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 12:08:24.553872 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 12:08:24.553882 | orchestrator | 2025-06-22 12:08:24.553892 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-22 12:08:24.553902 | orchestrator | Sunday 22 June 2025 12:06:42 +0000 (0:00:01.913) 0:00:27.665 *********** 2025-06-22 12:08:24.553912 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:08:24.553921 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:08:24.553931 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-22 12:08:24.553941 | orchestrator | 2025-06-22 12:08:24.553951 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-22 12:08:24.553967 | orchestrator | Sunday 22 June 2025 12:06:42 +0000 (0:00:00.378) 0:00:28.043 *********** 2025-06-22 12:08:24.553979 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:08:24.553990 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:08:24.554000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:08:24.554010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:08:24.554064 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 12:08:24.554075 | orchestrator | 2025-06-22 12:08:24.554085 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-06-22 12:08:24.554094 | orchestrator | Sunday 22 June 2025 12:07:27 +0000 (0:00:45.341) 0:01:13.384 *********** 2025-06-22 12:08:24.554104 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554123 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554133 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554142 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554162 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-22 12:08:24.554171 | orchestrator | 2025-06-22 12:08:24.554181 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-22 12:08:24.554190 | orchestrator | Sunday 22 June 2025 12:07:52 +0000 (0:00:24.426) 0:01:37.810 *********** 2025-06-22 12:08:24.554200 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554209 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554219 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554229 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554243 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554253 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554263 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 12:08:24.554272 | orchestrator | 2025-06-22 12:08:24.554282 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-22 12:08:24.554292 | orchestrator | Sunday 22 June 2025 12:08:04 +0000 (0:00:11.771) 0:01:49.582 *********** 2025-06-22 12:08:24.554308 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554318 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:08:24.554328 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:08:24.554337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554347 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:08:24.554362 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:08:24.554372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554382 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:08:24.554392 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:08:24.554402 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554412 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:08:24.554421 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:08:24.554431 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554441 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-06-22 12:08:24.554450 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:08:24.554460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 12:08:24.554470 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 12:08:24.554479 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 12:08:24.554489 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-22 12:08:24.554499 | orchestrator | 2025-06-22 12:08:24.554508 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:08:24.554518 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-22 12:08:24.554529 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 12:08:24.554539 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 12:08:24.554548 | orchestrator | 2025-06-22 12:08:24.554558 | orchestrator | 2025-06-22 12:08:24.554568 | orchestrator | 2025-06-22 12:08:24.554577 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:08:24.554587 | orchestrator | Sunday 22 June 2025 12:08:21 +0000 (0:00:17.539) 0:02:07.122 *********** 2025-06-22 12:08:24.554597 | orchestrator | =============================================================================== 2025-06-22 12:08:24.554606 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.34s 2025-06-22 12:08:24.554616 | orchestrator | generate keys ---------------------------------------------------------- 24.43s 2025-06-22 12:08:24.554625 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.54s 
2025-06-22 12:08:24.554635 | orchestrator | get keys from monitors ------------------------------------------------- 11.77s 2025-06-22 12:08:24.554644 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.10s 2025-06-22 12:08:24.554654 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.91s 2025-06-22 12:08:24.554664 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s 2025-06-22 12:08:24.554673 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s 2025-06-22 12:08:24.554690 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s 2025-06-22 12:08:24.554700 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s 2025-06-22 12:08:24.554710 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s 2025-06-22 12:08:24.554719 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2025-06-22 12:08:24.554729 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.70s 2025-06-22 12:08:24.554739 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2025-06-22 12:08:24.554748 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2025-06-22 12:08:24.554758 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2025-06-22 12:08:24.554772 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.62s 2025-06-22 12:08:24.554782 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s 2025-06-22 12:08:24.554792 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s 2025-06-22 
12:08:24.554801 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.56s
2025-06-22 12:08:27.597016 | orchestrator | 2025-06-22 12:08:27 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:08:27.598709 | orchestrator | 2025-06-22 12:08:27 | INFO  | Task 744f22dd-a594-4bb8-8db2-028635ed03f0 is in state STARTED
2025-06-22 12:08:27.600433 | orchestrator | 2025-06-22 12:08:27 | INFO  | Task 3756e2df-3e13-4719-98e8-ec80558df39c is in state STARTED
2025-06-22 12:08:27.600474 | orchestrator | 2025-06-22 12:08:27 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:08:52.030748 | orchestrator | 2025-06-22 12:08:52 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:08:52.032392 | orchestrator | 2025-06-22 12:08:52 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:08:52.034877 | orchestrator | 2025-06-22 12:08:52 | INFO  | Task 744f22dd-a594-4bb8-8db2-028635ed03f0 is in state SUCCESS
2025-06-22 12:08:52.037925 | orchestrator | 2025-06-22 12:08:52 | INFO  | Task 3756e2df-3e13-4719-98e8-ec80558df39c is in state STARTED
2025-06-22 12:08:52.037960 | orchestrator | 2025-06-22 12:08:52 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:04.237594 | orchestrator | 2025-06-22 12:09:04 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:04.237698 | orchestrator | 2025-06-22 12:09:04 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:04.241054 | orchestrator | 2025-06-22 12:09:04 | INFO  | Task 3756e2df-3e13-4719-98e8-ec80558df39c is in state SUCCESS
2025-06-22 12:09:04.243662 | orchestrator |
2025-06-22 12:09:04.243713 | orchestrator |
2025-06-22 12:09:04.243793 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-06-22 12:09:04.243807 | orchestrator |
2025-06-22 12:09:04.243818 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-06-22 12:09:04.244121 | orchestrator | Sunday 22 June 2025 12:08:26 +0000 (0:00:00.159) 0:00:00.159 ***********
2025-06-22 12:09:04.244141 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-06-22 12:09:04.244154 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-22 12:09:04.244165 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-22 12:09:04.244176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-22
12:09:04.244187 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244198 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-22 12:09:04.244208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-22 12:09:04.244219 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-22 12:09:04.244229 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-22 12:09:04.244240 | orchestrator | 2025-06-22 12:09:04.244251 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-22 12:09:04.244262 | orchestrator | Sunday 22 June 2025 12:08:30 +0000 (0:00:03.886) 0:00:04.046 *********** 2025-06-22 12:09:04.244273 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 12:09:04.244284 | orchestrator | 2025-06-22 12:09:04.244295 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-22 12:09:04.244306 | orchestrator | Sunday 22 June 2025 12:08:31 +0000 (0:00:00.941) 0:00:04.987 *********** 2025-06-22 12:09:04.244318 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-22 12:09:04.244329 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244340 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244351 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 12:09:04.244378 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244389 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.nova.keyring) 2025-06-22 12:09:04.244400 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-22 12:09:04.244410 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-22 12:09:04.244421 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-22 12:09:04.244432 | orchestrator | 2025-06-22 12:09:04.244442 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-22 12:09:04.244453 | orchestrator | Sunday 22 June 2025 12:08:43 +0000 (0:00:12.638) 0:00:17.626 *********** 2025-06-22 12:09:04.244486 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-22 12:09:04.244498 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244508 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244519 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 12:09:04.244530 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 12:09:04.244540 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-22 12:09:04.244551 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-22 12:09:04.244562 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-22 12:09:04.244572 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-22 12:09:04.244583 | orchestrator | 2025-06-22 12:09:04.244594 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:09:04.244605 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 
12:09:04.244617 | orchestrator | 2025-06-22 12:09:04.244627 | orchestrator | 2025-06-22 12:09:04.244638 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:09:04.244649 | orchestrator | Sunday 22 June 2025 12:08:50 +0000 (0:00:06.771) 0:00:24.398 *********** 2025-06-22 12:09:04.244659 | orchestrator | =============================================================================== 2025-06-22 12:09:04.244670 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.64s 2025-06-22 12:09:04.244681 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.77s 2025-06-22 12:09:04.244692 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.89s 2025-06-22 12:09:04.244702 | orchestrator | Create share directory -------------------------------------------------- 0.94s 2025-06-22 12:09:04.244713 | orchestrator | 2025-06-22 12:09:04.244724 | orchestrator | 2025-06-22 12:09:04.244735 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:09:04.244747 | orchestrator | 2025-06-22 12:09:04.244774 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:09:04.244787 | orchestrator | Sunday 22 June 2025 12:07:17 +0000 (0:00:00.189) 0:00:00.189 *********** 2025-06-22 12:09:04.244799 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:04.244812 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:09:04.244824 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:09:04.244837 | orchestrator | 2025-06-22 12:09:04.244849 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:09:04.244862 | orchestrator | Sunday 22 June 2025 12:07:17 +0000 (0:00:00.221) 0:00:00.411 *********** 2025-06-22 12:09:04.244874 | orchestrator | ok: [testbed-node-0] => 
(item=enable_horizon_True) 2025-06-22 12:09:04.244888 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-22 12:09:04.244900 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-22 12:09:04.244912 | orchestrator | 2025-06-22 12:09:04.244924 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-22 12:09:04.244937 | orchestrator | 2025-06-22 12:09:04.244974 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 12:09:04.244988 | orchestrator | Sunday 22 June 2025 12:07:18 +0000 (0:00:00.292) 0:00:00.703 *********** 2025-06-22 12:09:04.245000 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:09:04.245013 | orchestrator | 2025-06-22 12:09:04.245025 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-22 12:09:04.245037 | orchestrator | Sunday 22 June 2025 12:07:18 +0000 (0:00:00.430) 0:00:01.134 *********** 2025-06-22 12:09:04.245065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.245104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.245132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.245145 | orchestrator | 2025-06-22 12:09:04.245156 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-22 12:09:04.245274 | orchestrator | Sunday 22 June 2025 12:07:19 +0000 (0:00:00.882) 
0:00:02.017 ***********
2025-06-22 12:09:04.245287 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.245298 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.245309 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.245319 | orchestrator |
2025-06-22 12:09:04.245330 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-22 12:09:04.245341 | orchestrator | Sunday 22 June 2025 12:07:19 +0000 (0:00:00.363) 0:00:02.381 ***********
2025-06-22 12:09:04.245352 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-22 12:09:04.245370 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-22 12:09:04.245382 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-06-22 12:09:04.245393 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-06-22 12:09:04.245404 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-06-22 12:09:04.245415 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-06-22 12:09:04.245425 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-06-22 12:09:04.245436 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-06-22 12:09:04.245447 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-22 12:09:04.245466 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-22 12:09:04.245477 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-06-22 12:09:04.245488 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-06-22 12:09:04.245499 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-06-22 12:09:04.245509 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-06-22 12:09:04.245520 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-06-22 12:09:04.245531 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-06-22 12:09:04.245542 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-22 12:09:04.245553 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-22 12:09:04.245563 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-06-22 12:09:04.245574 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-06-22 12:09:04.245585 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-06-22 12:09:04.245596 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-06-22 12:09:04.245606 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-06-22 12:09:04.245617 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-06-22 12:09:04.245635 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-06-22 12:09:04.245648 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-06-22 12:09:04.245659 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-06-22 12:09:04.245670 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-06-22 12:09:04.245681 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-06-22 12:09:04.245692 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-06-22 12:09:04.245703 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-06-22 12:09:04.245714 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-06-22 12:09:04.245725 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-06-22 12:09:04.245736 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-06-22 12:09:04.245746 | orchestrator |
2025-06-22 12:09:04.245757 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.245768 | orchestrator | Sunday 22 June 2025 12:07:20 +0000 (0:00:00.652) 0:00:03.033 ***********
2025-06-22 12:09:04.245779 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.245790 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.245801 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.245818 | orchestrator |
2025-06-22 12:09:04.245829 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.245840 | orchestrator | Sunday 22 June 2025 12:07:20 +0000 (0:00:00.253) 0:00:03.286 ***********
2025-06-22 12:09:04.245851 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.245862 | orchestrator |
2025-06-22 12:09:04.245878 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.245890 | orchestrator | Sunday 22 June 2025 12:07:20 +0000 (0:00:00.120) 0:00:03.407 ***********
2025-06-22 12:09:04.245903 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.245916 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.245929 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.245941 | orchestrator |
2025-06-22 12:09:04.245983 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.245996 | orchestrator | Sunday 22 June 2025 12:07:21 +0000 (0:00:00.401) 0:00:03.809 ***********
2025-06-22 12:09:04.246010 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.246078 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.246091 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.246104 | orchestrator |
2025-06-22 12:09:04.246115 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.246128 | orchestrator | Sunday 22 June 2025 12:07:21 +0000 (0:00:00.264) 0:00:04.073 ***********
2025-06-22 12:09:04.246140 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246152 | orchestrator |
2025-06-22 12:09:04.246164 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.246176 | orchestrator | Sunday 22 June 2025 12:07:21 +0000 (0:00:00.122) 0:00:04.195 ***********
2025-06-22 12:09:04.246188 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246201 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.246213 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.246226 | orchestrator |
2025-06-22 12:09:04.246239 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.246252 | orchestrator | Sunday 22 June 2025 12:07:21 +0000 (0:00:00.243) 0:00:04.439 ***********
2025-06-22 12:09:04.246263 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.246274 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.246285 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.246296 | orchestrator |
2025-06-22 12:09:04.246307 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.246317 | orchestrator | Sunday 22 June 2025 12:07:22 +0000 (0:00:00.305) 0:00:04.744 ***********
2025-06-22 12:09:04.246328 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246339 | orchestrator |
2025-06-22 12:09:04.246350 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.246360 | orchestrator | Sunday 22 June 2025 12:07:22 +0000 (0:00:00.345) 0:00:05.090 ***********
2025-06-22 12:09:04.246371 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246382 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.246392 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.246403 | orchestrator |
2025-06-22 12:09:04.246413 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.246424 | orchestrator | Sunday 22 June 2025 12:07:22 +0000 (0:00:00.300) 0:00:05.391 ***********
2025-06-22 12:09:04.246457 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.246468 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.246478 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.246489 | orchestrator |
2025-06-22 12:09:04.246506 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.246517 | orchestrator | Sunday 22 June 2025 12:07:23 +0000 (0:00:00.364) 0:00:05.756 ***********
2025-06-22 12:09:04.246527 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246538 | orchestrator |
2025-06-22 12:09:04.246549 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.246568 | orchestrator | Sunday 22 June 2025 12:07:23 +0000 (0:00:00.142) 0:00:05.898 ***********
2025-06-22 12:09:04.246579 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246590 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.246600 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.246611 | orchestrator |
2025-06-22 12:09:04.246622 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.246633 | orchestrator | Sunday 22 June 2025 12:07:23 +0000 (0:00:00.269) 0:00:06.168 ***********
2025-06-22 12:09:04.246644 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.246655 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.246666 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.246677 | orchestrator |
2025-06-22 12:09:04.246688 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.246698 | orchestrator | Sunday 22 June 2025 12:07:24 +0000 (0:00:00.503) 0:00:06.671 ***********
2025-06-22 12:09:04.246709 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246720 | orchestrator |
2025-06-22 12:09:04.246731 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.246742 | orchestrator | Sunday 22 June 2025 12:07:24 +0000 (0:00:00.138) 0:00:06.810 ***********
2025-06-22 12:09:04.246752 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246763 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.246774 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.246785 | orchestrator |
2025-06-22 12:09:04.246796 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.246807 | orchestrator | Sunday 22 June 2025 12:07:24 +0000 (0:00:00.299) 0:00:07.109 ***********
2025-06-22 12:09:04.246817 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.246828 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.246839 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.246850 | orchestrator |
2025-06-22 12:09:04.246861 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.246872 | orchestrator | Sunday 22 June 2025 12:07:24 +0000 (0:00:00.326) 0:00:07.436 ***********
2025-06-22 12:09:04.246882 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.246893 | orchestrator |
2025-06-22 12:09:04.246918 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.246940 | orchestrator | Sunday 22 June 2025 12:07:25 +0000 (0:00:00.169) 0:00:07.605 ***********
2025-06-22 12:09:04.247013 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247025 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.247036 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.247047 | orchestrator |
2025-06-22 12:09:04.247057 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.247068 | orchestrator | Sunday 22 June 2025 12:07:25 +0000 (0:00:00.503) 0:00:08.109 ***********
2025-06-22 12:09:04.247079 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.247099 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.247111 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.247121 | orchestrator |
2025-06-22 12:09:04.247132 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.247143 | orchestrator | Sunday 22 June 2025 12:07:25 +0000 (0:00:00.325) 0:00:08.434 ***********
2025-06-22 12:09:04.247153 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247162 | orchestrator |
2025-06-22 12:09:04.247172 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.247181 | orchestrator | Sunday 22 June 2025 12:07:26 +0000 (0:00:00.127) 0:00:08.562 ***********
2025-06-22 12:09:04.247191 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247201 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.247210 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.247220 | orchestrator |
2025-06-22 12:09:04.247229 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.247239 | orchestrator | Sunday 22 June 2025 12:07:26 +0000 (0:00:00.284) 0:00:08.847 ***********
2025-06-22 12:09:04.247255 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.247265 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.247275 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.247284 | orchestrator |
2025-06-22 12:09:04.247294 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.247303 | orchestrator | Sunday 22 June 2025 12:07:26 +0000 (0:00:00.341) 0:00:09.188 ***********
2025-06-22 12:09:04.247313 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247323 | orchestrator |
2025-06-22 12:09:04.247332 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.247342 | orchestrator | Sunday 22 June 2025 12:07:26 +0000 (0:00:00.122) 0:00:09.311 ***********
2025-06-22 12:09:04.247352 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247361 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.247371 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.247380 | orchestrator |
2025-06-22 12:09:04.247390 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.247400 | orchestrator | Sunday 22 June 2025 12:07:27 +0000 (0:00:00.604) 0:00:09.915 ***********
2025-06-22 12:09:04.247409 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.247419 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.247428 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.247438 | orchestrator |
2025-06-22 12:09:04.247447 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.247457 | orchestrator | Sunday 22 June 2025 12:07:27 +0000 (0:00:00.326) 0:00:10.242 ***********
2025-06-22 12:09:04.247466 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247476 | orchestrator |
2025-06-22 12:09:04.247486 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.247495 | orchestrator | Sunday 22 June 2025 12:07:27 +0000 (0:00:00.152) 0:00:10.394 ***********
2025-06-22 12:09:04.247505 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247514 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.247529 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.247539 | orchestrator |
2025-06-22 12:09:04.247548 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-22 12:09:04.247558 | orchestrator | Sunday 22 June 2025 12:07:28 +0000 (0:00:00.371) 0:00:10.765 ***********
2025-06-22 12:09:04.247568 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:04.247577 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:04.247587 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:04.247596 | orchestrator |
2025-06-22 12:09:04.247606 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-22 12:09:04.247616 | orchestrator | Sunday 22 June 2025 12:07:28 +0000 (0:00:00.496) 0:00:11.262 ***********
2025-06-22 12:09:04.247625 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247635 | orchestrator |
2025-06-22 12:09:04.247645 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-22 12:09:04.247654 | orchestrator | Sunday 22 June 2025 12:07:28 +0000 (0:00:00.162) 0:00:11.424 ***********
2025-06-22 12:09:04.247664 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247674 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.247683 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.247693 | orchestrator |
2025-06-22 12:09:04.247702 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-22 12:09:04.247712 | orchestrator | Sunday 22 June 2025 12:07:29 +0000 (0:00:00.318) 0:00:11.743 ***********
2025-06-22 12:09:04.247721 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:09:04.247731 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:09:04.247740 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:09:04.247750 | orchestrator |
2025-06-22 12:09:04.247759 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-22 12:09:04.247769 | orchestrator | Sunday 22 June 2025 12:07:30 +0000 (0:00:01.569) 0:00:13.313 ***********
2025-06-22 12:09:04.247790 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-22 12:09:04.247799 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-22 12:09:04.247809 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-22 12:09:04.247818 | orchestrator |
2025-06-22 12:09:04.247828 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-22 12:09:04.247838 | orchestrator | Sunday 22 June 2025 12:07:32 +0000 (0:00:01.915) 0:00:15.228 ***********
2025-06-22 12:09:04.247847 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-22 12:09:04.247857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-22 12:09:04.247866 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-22 12:09:04.247876 | orchestrator |
2025-06-22 12:09:04.247886 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-22 12:09:04.247900 | orchestrator | Sunday 22 June 2025 12:07:34 +0000 (0:00:02.148) 0:00:17.377 ***********
2025-06-22 12:09:04.247910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-22 12:09:04.247920 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-22 12:09:04.247930 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-22 12:09:04.247940 | orchestrator |
2025-06-22 12:09:04.247964 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-22 12:09:04.247974 | orchestrator | Sunday 22 June 2025 12:07:36 +0000 (0:00:01.629) 0:00:19.006 ***********
2025-06-22 12:09:04.247983 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:04.247993 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:04.248002 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:04.248012 | orchestrator |
2025-06-22 12:09:04.248021 | orchestrator | TASK [horizon : Copying over custom themes]
************************************ 2025-06-22 12:09:04.248031 | orchestrator | Sunday 22 June 2025 12:07:36 +0000 (0:00:00.301) 0:00:19.308 *********** 2025-06-22 12:09:04.248040 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:04.248050 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:04.248059 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:04.248069 | orchestrator | 2025-06-22 12:09:04.248078 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 12:09:04.248088 | orchestrator | Sunday 22 June 2025 12:07:37 +0000 (0:00:00.268) 0:00:19.577 *********** 2025-06-22 12:09:04.248098 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:09:04.248107 | orchestrator | 2025-06-22 12:09:04.248117 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-22 12:09:04.248126 | orchestrator | Sunday 22 June 2025 12:07:37 +0000 (0:00:00.796) 0:00:20.373 *********** 2025-06-22 12:09:04.248143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.248171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.248189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.248206 | orchestrator | 2025-06-22 12:09:04.248215 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-22 12:09:04.248225 | orchestrator | Sunday 22 June 2025 12:07:39 +0000 (0:00:01.670) 
0:00:22.044 *********** 2025-06-22 12:09:04.248248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:09:04.248266 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:04.248283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:09:04.248294 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:04.248310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:09:04.248330 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:04.248340 | orchestrator | 2025-06-22 12:09:04.248350 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-22 12:09:04.248360 | orchestrator | Sunday 22 June 2025 12:07:40 +0000 (0:00:00.727) 0:00:22.771 *********** 2025-06-22 12:09:04.248377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:09:04.248388 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:04.248404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:09:04.248423 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:04.248440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 12:09:04.248451 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:04.248461 | orchestrator | 2025-06-22 12:09:04.248470 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-22 12:09:04.248480 | orchestrator | Sunday 22 June 2025 12:07:41 +0000 (0:00:01.176) 0:00:23.947 *********** 2025-06-22 12:09:04.248495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.248520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.248537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 12:09:04.248555 | orchestrator | 2025-06-22 12:09:04.248565 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 12:09:04.248574 | orchestrator | Sunday 22 June 2025 12:07:42 +0000 (0:00:01.239) 0:00:25.187 *********** 2025-06-22 12:09:04.248584 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:04.248594 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:04.248604 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:04.248614 | orchestrator | 2025-06-22 12:09:04.248623 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 12:09:04.248633 | orchestrator | Sunday 22 June 2025 12:07:42 +0000 (0:00:00.315) 0:00:25.502 *********** 2025-06-22 12:09:04.248648 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:09:04.248658 | orchestrator | 2025-06-22 12:09:04.248668 | orchestrator | TASK 
[horizon : Creating Horizon database] *************************************
2025-06-22 12:09:04.248677 | orchestrator | Sunday 22 June 2025 12:07:43 +0000 (0:00:00.737) 0:00:26.239 ***********
2025-06-22 12:09:04.248687 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:09:04.248696 | orchestrator |
2025-06-22 12:09:04.248706 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-22 12:09:04.248715 | orchestrator | Sunday 22 June 2025 12:07:45 +0000 (0:00:02.244) 0:00:28.484 ***********
2025-06-22 12:09:04.248725 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:09:04.248735 | orchestrator |
2025-06-22 12:09:04.248744 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-22 12:09:04.248754 | orchestrator | Sunday 22 June 2025 12:07:48 +0000 (0:00:02.191) 0:00:30.676 ***********
2025-06-22 12:09:04.248763 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:09:04.248773 | orchestrator |
2025-06-22 12:09:04.248782 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 12:09:04.248801 | orchestrator | Sunday 22 June 2025 12:08:03 +0000 (0:00:15.581) 0:00:46.257 ***********
2025-06-22 12:09:04.248810 | orchestrator |
2025-06-22 12:09:04.248820 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 12:09:04.248834 | orchestrator | Sunday 22 June 2025 12:08:03 +0000 (0:00:00.065) 0:00:46.323 ***********
2025-06-22 12:09:04.248849 | orchestrator |
2025-06-22 12:09:04.248865 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 12:09:04.248881 | orchestrator | Sunday 22 June 2025 12:08:03 +0000 (0:00:00.062) 0:00:46.386 ***********
2025-06-22 12:09:04.248898 | orchestrator |
2025-06-22 12:09:04.248915 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-22 12:09:04.248925 | orchestrator | Sunday 22 June 2025 12:08:03 +0000 (0:00:00.063) 0:00:46.449 ***********
2025-06-22 12:09:04.248935 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:09:04.248966 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:09:04.248984 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:09:04.249002 | orchestrator |
2025-06-22 12:09:04.249014 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:09:04.249024 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-22 12:09:04.249034 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-22 12:09:04.249044 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-22 12:09:04.249054 | orchestrator |
2025-06-22 12:09:04.249064 | orchestrator |
2025-06-22 12:09:04.249080 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:09:04.249089 | orchestrator | Sunday 22 June 2025 12:09:03 +0000 (0:00:59.308) 0:01:45.758 ***********
2025-06-22 12:09:04.249099 | orchestrator | ===============================================================================
2025-06-22 12:09:04.249109 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.31s
2025-06-22 12:09:04.249118 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.58s
2025-06-22 12:09:04.249128 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.24s
2025-06-22 12:09:04.249138 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.19s
2025-06-22 12:09:04.249147 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.15s
2025-06-22 12:09:04.249157 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s
2025-06-22 12:09:04.249167 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.67s
2025-06-22 12:09:04.249176 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.63s
2025-06-22 12:09:04.249186 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.57s
2025-06-22 12:09:04.249196 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.24s
2025-06-22 12:09:04.249205 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.18s
2025-06-22 12:09:04.249215 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.88s
2025-06-22 12:09:04.249225 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s
2025-06-22 12:09:04.249234 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s
2025-06-22 12:09:04.249244 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.73s
2025-06-22 12:09:04.249254 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2025-06-22 12:09:04.249264 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s
2025-06-22 12:09:04.249273 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s
2025-06-22 12:09:04.249290 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-06-22 12:09:04.249300 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-06-22 12:09:04.249310 | orchestrator | 2025-06-22 12:09:04 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:07.273087 | orchestrator | 2025-06-22 12:09:07 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:07.273975 | orchestrator | 2025-06-22 12:09:07 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:07.274068 | orchestrator | 2025-06-22 12:09:07 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:10.314378 | orchestrator | 2025-06-22 12:09:10 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:10.316196 | orchestrator | 2025-06-22 12:09:10 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:10.316238 | orchestrator | 2025-06-22 12:09:10 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:13.359454 | orchestrator | 2025-06-22 12:09:13 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:13.360765 | orchestrator | 2025-06-22 12:09:13 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:13.360800 | orchestrator | 2025-06-22 12:09:13 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:16.396669 | orchestrator | 2025-06-22 12:09:16 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:16.399188 | orchestrator | 2025-06-22 12:09:16 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:16.399245 | orchestrator | 2025-06-22 12:09:16 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:19.432626 | orchestrator | 2025-06-22 12:09:19 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:19.433654 | orchestrator | 2025-06-22 12:09:19 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:19.433687 | orchestrator | 2025-06-22 12:09:19 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:22.485283 | orchestrator | 2025-06-22 12:09:22 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:22.487179 | orchestrator | 2025-06-22 12:09:22 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:22.487219 | orchestrator | 2025-06-22 12:09:22 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:25.530178 | orchestrator | 2025-06-22 12:09:25 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:25.530308 | orchestrator | 2025-06-22 12:09:25 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:25.530326 | orchestrator | 2025-06-22 12:09:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:28.579344 | orchestrator | 2025-06-22 12:09:28 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:28.580946 | orchestrator | 2025-06-22 12:09:28 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:28.580977 | orchestrator | 2025-06-22 12:09:28 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:31.623373 | orchestrator | 2025-06-22 12:09:31 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:31.624978 | orchestrator | 2025-06-22 12:09:31 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:31.625169 | orchestrator | 2025-06-22 12:09:31 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:34.673187 | orchestrator | 2025-06-22 12:09:34 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:34.675852 | orchestrator | 2025-06-22 12:09:34 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:34.675894 | orchestrator | 2025-06-22 12:09:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:37.716967 | orchestrator | 2025-06-22 12:09:37 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:37.718522 | orchestrator | 2025-06-22 12:09:37 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:37.718561 | orchestrator | 2025-06-22 12:09:37 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:40.759737 | orchestrator | 2025-06-22 12:09:40 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:40.760425 | orchestrator | 2025-06-22 12:09:40 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:40.760466 | orchestrator | 2025-06-22 12:09:40 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:43.805463 | orchestrator | 2025-06-22 12:09:43 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:43.806404 | orchestrator | 2025-06-22 12:09:43 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:43.806435 | orchestrator | 2025-06-22 12:09:43 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:46.847352 | orchestrator | 2025-06-22 12:09:46 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:46.847473 | orchestrator | 2025-06-22 12:09:46 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state STARTED
2025-06-22 12:09:46.847489 | orchestrator | 2025-06-22 12:09:46 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:49.897864 | orchestrator | 2025-06-22 12:09:49 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state STARTED
2025-06-22 12:09:49.899250 | orchestrator | 2025-06-22 12:09:49 | INFO  | Task dcc68a73-877a-47f9-9ce4-15b3a8ef3fcb is in state SUCCESS
2025-06-22 12:09:49.899282 | orchestrator | 2025-06-22 12:09:49 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:09:52.933724 | orchestrator | 2025-06-22 12:09:52 | INFO  | Task ff428187-6934-4848-a511-b5a8b07eb955 is in state STARTED
2025-06-22 12:09:52.935213 | orchestrator | 2025-06-22 12:09:52 | INFO  | Task efdca566-32f3-4bf3-b828-6d3937ab7c20 is in state SUCCESS
2025-06-22 12:09:52.936935 | orchestrator |
2025-06-22 12:09:52.936972 | orchestrator |
2025-06-22 12:09:52.936985 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-06-22 12:09:52.936997 | orchestrator |
2025-06-22 12:09:52.937009 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-06-22 12:09:52.937020 | orchestrator | Sunday 22 June 2025 12:08:54 +0000 (0:00:00.232) 0:00:00.232 ***********
2025-06-22 12:09:52.937031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-06-22 12:09:52.937043 | orchestrator |
2025-06-22 12:09:52.937055 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-06-22 12:09:52.937065 | orchestrator | Sunday 22 June 2025 12:08:55 +0000 (0:00:00.213) 0:00:00.446 ***********
2025-06-22 12:09:52.937151 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-06-22 12:09:52.937284 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-06-22 12:09:52.937300 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-06-22 12:09:52.938263 | orchestrator |
2025-06-22 12:09:52.938344 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-06-22 12:09:52.938359 | orchestrator | Sunday 22 June 2025 12:08:56 +0000 (0:00:01.247) 0:00:01.693 ***********
2025-06-22 12:09:52.938382 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-06-22 12:09:52.938391 | orchestrator |
2025-06-22 12:09:52.938400 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-06-22 12:09:52.938409 | orchestrator | Sunday 22 June 2025 12:08:56
12:08:57 +0000 (0:00:01.063) 0:00:02.756 *********** 2025-06-22 12:09:52.938419 | orchestrator | changed: [testbed-manager] 2025-06-22 12:09:52.938428 | orchestrator | 2025-06-22 12:09:52.938437 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-22 12:09:52.938445 | orchestrator | Sunday 22 June 2025 12:08:58 +0000 (0:00:01.028) 0:00:03.784 *********** 2025-06-22 12:09:52.938455 | orchestrator | changed: [testbed-manager] 2025-06-22 12:09:52.938464 | orchestrator | 2025-06-22 12:09:52.938473 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-22 12:09:52.938482 | orchestrator | Sunday 22 June 2025 12:08:59 +0000 (0:00:00.877) 0:00:04.662 *********** 2025-06-22 12:09:52.938491 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-06-22 12:09:52.938499 | orchestrator | ok: [testbed-manager] 2025-06-22 12:09:52.938508 | orchestrator | 2025-06-22 12:09:52.938517 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-22 12:09:52.938526 | orchestrator | Sunday 22 June 2025 12:09:39 +0000 (0:00:40.617) 0:00:45.280 *********** 2025-06-22 12:09:52.938535 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-22 12:09:52.938543 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-22 12:09:52.938552 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-22 12:09:52.938561 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-22 12:09:52.938570 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-22 12:09:52.938578 | orchestrator | 2025-06-22 12:09:52.938587 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-22 12:09:52.938596 | orchestrator | Sunday 22 June 2025 12:09:43 +0000 (0:00:04.013) 0:00:49.293 *********** 2025-06-22 
12:09:52.938604 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-22 12:09:52.938613 | orchestrator | 2025-06-22 12:09:52.938622 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-22 12:09:52.938631 | orchestrator | Sunday 22 June 2025 12:09:44 +0000 (0:00:00.469) 0:00:49.762 *********** 2025-06-22 12:09:52.938639 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:09:52.938648 | orchestrator | 2025-06-22 12:09:52.938657 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-22 12:09:52.938665 | orchestrator | Sunday 22 June 2025 12:09:44 +0000 (0:00:00.122) 0:00:49.884 *********** 2025-06-22 12:09:52.938674 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:09:52.938683 | orchestrator | 2025-06-22 12:09:52.938691 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-22 12:09:52.938700 | orchestrator | Sunday 22 June 2025 12:09:44 +0000 (0:00:00.290) 0:00:50.175 *********** 2025-06-22 12:09:52.938709 | orchestrator | changed: [testbed-manager] 2025-06-22 12:09:52.938718 | orchestrator | 2025-06-22 12:09:52.938727 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-22 12:09:52.938736 | orchestrator | Sunday 22 June 2025 12:09:46 +0000 (0:00:01.876) 0:00:52.052 *********** 2025-06-22 12:09:52.938745 | orchestrator | changed: [testbed-manager] 2025-06-22 12:09:52.938754 | orchestrator | 2025-06-22 12:09:52.938762 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-22 12:09:52.938771 | orchestrator | Sunday 22 June 2025 12:09:47 +0000 (0:00:00.692) 0:00:52.744 *********** 2025-06-22 12:09:52.938780 | orchestrator | changed: [testbed-manager] 2025-06-22 12:09:52.938788 | orchestrator | 2025-06-22 12:09:52.938797 | orchestrator | RUNNING HANDLER [osism.services.cephclient : 
Copy bash completion scripts] ***** 2025-06-22 12:09:52.938814 | orchestrator | Sunday 22 June 2025 12:09:47 +0000 (0:00:00.584) 0:00:53.328 *********** 2025-06-22 12:09:52.938823 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-22 12:09:52.938831 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-22 12:09:52.938840 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-22 12:09:52.938849 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-22 12:09:52.938857 | orchestrator | 2025-06-22 12:09:52.938866 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:09:52.938875 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 12:09:52.938884 | orchestrator | 2025-06-22 12:09:52.938932 | orchestrator | 2025-06-22 12:09:52.939047 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:09:52.939060 | orchestrator | Sunday 22 June 2025 12:09:49 +0000 (0:00:01.426) 0:00:54.754 *********** 2025-06-22 12:09:52.939069 | orchestrator | =============================================================================== 2025-06-22 12:09:52.939078 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.62s 2025-06-22 12:09:52.939086 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.01s 2025-06-22 12:09:52.939095 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.88s 2025-06-22 12:09:52.939104 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.43s 2025-06-22 12:09:52.939112 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2025-06-22 12:09:52.939121 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.06s 2025-06-22 
12:09:52.939129 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.03s 2025-06-22 12:09:52.939138 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-06-22 12:09:52.939146 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2025-06-22 12:09:52.939155 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2025-06-22 12:09:52.939170 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2025-06-22 12:09:52.939179 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-06-22 12:09:52.939187 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-06-22 12:09:52.939196 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-06-22 12:09:52.939204 | orchestrator | 2025-06-22 12:09:52.939213 | orchestrator | 2025-06-22 12:09:52.939221 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:09:52.939230 | orchestrator | 2025-06-22 12:09:52.939239 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:09:52.939247 | orchestrator | Sunday 22 June 2025 12:07:17 +0000 (0:00:00.226) 0:00:00.226 *********** 2025-06-22 12:09:52.939256 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:52.939264 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:09:52.939273 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:09:52.939282 | orchestrator | 2025-06-22 12:09:52.939290 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:09:52.939299 | orchestrator | Sunday 22 June 2025 12:07:17 +0000 (0:00:00.257) 0:00:00.483 *********** 2025-06-22 12:09:52.939307 | 
orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-22 12:09:52.939316 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-22 12:09:52.939325 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-22 12:09:52.939333 | orchestrator |
2025-06-22 12:09:52.939342 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-22 12:09:52.939351 | orchestrator |
2025-06-22 12:09:52.939359 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-22 12:09:52.939375 | orchestrator | Sunday 22 June 2025 12:07:18 +0000 (0:00:00.349) 0:00:00.833 ***********
2025-06-22 12:09:52.939384 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:09:52.939394 | orchestrator |
2025-06-22 12:09:52.939402 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-22 12:09:52.939411 | orchestrator | Sunday 22 June 2025 12:07:18 +0000 (0:00:00.483) 0:00:01.316 ***********
2025-06-22 12:09:52.939425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.939475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.939493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.939504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.939521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.939530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.939540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.939658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.939679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.939689 | orchestrator |
2025-06-22 12:09:52.939698 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-06-22 12:09:52.939707 | orchestrator | Sunday 22 June 2025 12:07:20 +0000 (0:00:01.601) 0:00:02.918 ***********
2025-06-22 12:09:52.939716 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-06-22 12:09:52.939725 | orchestrator |
2025-06-22 12:09:52.939734 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-06-22 12:09:52.939750 | orchestrator | Sunday 22 June 2025 12:07:21 +0000 (0:00:00.765) 0:00:03.683 ***********
2025-06-22 12:09:52.939759 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:09:52.939768 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:09:52.939777 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:09:52.939786 | orchestrator |
2025-06-22 12:09:52.939794 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-06-22 12:09:52.939803 | orchestrator | Sunday 22 June 2025 12:07:21 +0000 (0:00:00.401) 0:00:04.085 ***********
2025-06-22 12:09:52.939812 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 12:09:52.939821 | orchestrator |
2025-06-22 12:09:52.939830 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-22 12:09:52.939839 | orchestrator | Sunday 22 June 2025 12:07:22 +0000 (0:00:00.631) 0:00:04.717 ***********
2025-06-22 12:09:52.939847 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:09:52.939856 | orchestrator |
2025-06-22 12:09:52.939865 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-06-22 12:09:52.939873 | orchestrator | Sunday 22 June 2025 12:07:22 +0000 (0:00:00.525) 0:00:05.242 ***********
2025-06-22 12:09:52.939883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.939920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.939935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.939982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.939994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.940003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.940013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940061 | orchestrator |
2025-06-22 12:09:52.940070 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-06-22 12:09:52.940079 | orchestrator | Sunday 22 June 2025 12:07:26 +0000 (0:00:03.479) 0:00:08.722 ***********
2025-06-22 12:09:52.940089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.940099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.940108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940117 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:09:52.940134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.940148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.940168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940177 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:09:52.940186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.940196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.940205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940214 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:09:52.940224 | orchestrator |
2025-06-22 12:09:52.940238 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-06-22 12:09:52.940247 | orchestrator | Sunday 22 June 2025 12:07:26 +0000 (0:00:00.575) 0:00:09.298 ***********
2025-06-22 12:09:52.940261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-22 12:09:52.940276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-22 12:09:52.940285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-22 12:09:52.940295 |
orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.940304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 12:09:52.940318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.940328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:09:52.940343 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.940357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 12:09:52.940367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.940376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 12:09:52.940385 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.940394 | orchestrator | 2025-06-22 12:09:52.940403 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-22 12:09:52.940412 | orchestrator | Sunday 22 June 2025 12:07:27 +0000 (0:00:00.815) 0:00:10.113 *********** 2025-06-22 12:09:52.940427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940568 | orchestrator | 2025-06-22 12:09:52.940577 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-22 12:09:52.940586 | orchestrator | Sunday 22 June 2025 12:07:31 +0000 (0:00:03.794) 0:00:13.908 *********** 2025-06-22 12:09:52.940595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.940626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.940650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.940669 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.940707 | orchestrator | 2025-06-22 
12:09:52.940716 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-22 12:09:52.940725 | orchestrator | Sunday 22 June 2025 12:07:36 +0000 (0:00:05.104) 0:00:19.013 *********** 2025-06-22 12:09:52.940734 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.940742 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:09:52.940751 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:09:52.940760 | orchestrator | 2025-06-22 12:09:52.940773 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-22 12:09:52.940782 | orchestrator | Sunday 22 June 2025 12:07:37 +0000 (0:00:01.342) 0:00:20.356 *********** 2025-06-22 12:09:52.940791 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.940799 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.940808 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.940817 | orchestrator | 2025-06-22 12:09:52.940825 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-22 12:09:52.940834 | orchestrator | Sunday 22 June 2025 12:07:38 +0000 (0:00:00.607) 0:00:20.964 *********** 2025-06-22 12:09:52.940843 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.940851 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.940860 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.940869 | orchestrator | 2025-06-22 12:09:52.940878 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-22 12:09:52.940886 | orchestrator | Sunday 22 June 2025 12:07:38 +0000 (0:00:00.464) 0:00:21.428 *********** 2025-06-22 12:09:52.940922 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.940932 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.940940 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.940949 | orchestrator | 2025-06-22 
12:09:52.940957 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-22 12:09:52.940966 | orchestrator | Sunday 22 June 2025 12:07:39 +0000 (0:00:00.316) 0:00:21.745 *********** 2025-06-22 12:09:52.940975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.940991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.941006 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.941021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.941031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.941041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 12:09:52.941058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941092 | orchestrator | 2025-06-22 12:09:52.941101 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 12:09:52.941109 | orchestrator | Sunday 22 June 2025 12:07:41 +0000 (0:00:02.346) 0:00:24.091 *********** 2025-06-22 12:09:52.941119 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.941127 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.941136 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.941145 | orchestrator | 2025-06-22 12:09:52.941153 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ****************************** 2025-06-22 12:09:52.941166 | orchestrator | Sunday 22 June 2025 12:07:41 +0000 (0:00:00.299) 0:00:24.391 *********** 2025-06-22 12:09:52.941175 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 12:09:52.941184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 12:09:52.941192 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 12:09:52.941201 | orchestrator | 2025-06-22 12:09:52.941210 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-22 12:09:52.941218 | orchestrator | Sunday 22 June 2025 12:07:43 +0000 (0:00:02.075) 0:00:26.466 *********** 2025-06-22 12:09:52.941227 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:09:52.941236 | orchestrator | 2025-06-22 12:09:52.941244 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-22 12:09:52.941253 | orchestrator | Sunday 22 June 2025 12:07:44 +0000 (0:00:00.950) 0:00:27.417 *********** 2025-06-22 12:09:52.941267 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.941275 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.941284 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.941293 | orchestrator | 2025-06-22 12:09:52.941301 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-22 12:09:52.941310 | orchestrator | Sunday 22 June 2025 12:07:45 +0000 (0:00:00.534) 0:00:27.951 *********** 2025-06-22 12:09:52.941319 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:09:52.941328 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 12:09:52.941336 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 12:09:52.941345 
| orchestrator | 2025-06-22 12:09:52.941354 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-22 12:09:52.941363 | orchestrator | Sunday 22 June 2025 12:07:46 +0000 (0:00:01.021) 0:00:28.973 *********** 2025-06-22 12:09:52.941371 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:52.941380 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:09:52.941389 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:09:52.941398 | orchestrator | 2025-06-22 12:09:52.941406 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-22 12:09:52.941415 | orchestrator | Sunday 22 June 2025 12:07:46 +0000 (0:00:00.280) 0:00:29.254 *********** 2025-06-22 12:09:52.941424 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 12:09:52.941432 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 12:09:52.941441 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 12:09:52.941449 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 12:09:52.941458 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 12:09:52.941467 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 12:09:52.941475 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 12:09:52.941484 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 12:09:52.941493 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 12:09:52.941501 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 12:09:52.941510 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 12:09:52.941518 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 12:09:52.941527 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 12:09:52.941535 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 12:09:52.941550 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 12:09:52.941559 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 12:09:52.941568 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 12:09:52.941576 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 12:09:52.941585 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 12:09:52.941594 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 12:09:52.941602 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 12:09:52.941616 | orchestrator | 2025-06-22 12:09:52.941625 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-22 12:09:52.941633 | orchestrator | Sunday 22 June 2025 12:07:55 +0000 (0:00:08.915) 0:00:38.169 *********** 2025-06-22 12:09:52.941642 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 12:09:52.941650 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 12:09:52.941663 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 12:09:52.941671 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 12:09:52.941680 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 12:09:52.941689 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 12:09:52.941697 | orchestrator | 2025-06-22 12:09:52.941706 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-22 12:09:52.941715 | orchestrator | Sunday 22 June 2025 12:07:58 +0000 (0:00:02.652) 0:00:40.822 *********** 2025-06-22 12:09:52.941724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.941734 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.941751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 12:09:52.941770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 12:09:52.941835 | orchestrator | 2025-06-22 12:09:52.941844 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 12:09:52.941853 | orchestrator | Sunday 22 June 2025 12:08:00 +0000 (0:00:02.289) 0:00:43.112 *********** 2025-06-22 12:09:52.941862 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.941871 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.941879 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.941888 | orchestrator | 2025-06-22 12:09:52.941943 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-22 12:09:52.941955 | orchestrator | Sunday 22 June 2025 12:08:00 +0000 (0:00:00.294) 0:00:43.407 *********** 2025-06-22 12:09:52.941969 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.941984 | orchestrator | 2025-06-22 12:09:52.941999 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-22 12:09:52.942094 | orchestrator | Sunday 22 June 2025 12:08:03 +0000 (0:00:02.309) 0:00:45.716 *********** 2025-06-22 12:09:52.942115 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942129 | orchestrator | 2025-06-22 12:09:52.942143 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-22 12:09:52.942158 | orchestrator | Sunday 22 June 2025 12:08:05 +0000 (0:00:02.735) 0:00:48.451 *********** 2025-06-22 12:09:52.942179 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:52.942195 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:09:52.942211 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:09:52.942226 | orchestrator | 2025-06-22 12:09:52.942241 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-22 12:09:52.942250 | orchestrator | Sunday 22 June 2025 
12:08:06 +0000 (0:00:00.843) 0:00:49.295 *********** 2025-06-22 12:09:52.942259 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:52.942267 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:09:52.942276 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:09:52.942285 | orchestrator | 2025-06-22 12:09:52.942293 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-22 12:09:52.942302 | orchestrator | Sunday 22 June 2025 12:08:07 +0000 (0:00:00.316) 0:00:49.612 *********** 2025-06-22 12:09:52.942311 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.942319 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.942328 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.942337 | orchestrator | 2025-06-22 12:09:52.942345 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-22 12:09:52.942354 | orchestrator | Sunday 22 June 2025 12:08:07 +0000 (0:00:00.365) 0:00:49.977 *********** 2025-06-22 12:09:52.942363 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942371 | orchestrator | 2025-06-22 12:09:52.942380 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-22 12:09:52.942389 | orchestrator | Sunday 22 June 2025 12:08:20 +0000 (0:00:13.352) 0:01:03.330 *********** 2025-06-22 12:09:52.942398 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942406 | orchestrator | 2025-06-22 12:09:52.942415 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 12:09:52.942424 | orchestrator | Sunday 22 June 2025 12:08:30 +0000 (0:00:09.515) 0:01:12.845 *********** 2025-06-22 12:09:52.942432 | orchestrator | 2025-06-22 12:09:52.942441 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 12:09:52.942450 | orchestrator | Sunday 22 June 2025 12:08:30 +0000 
(0:00:00.263) 0:01:13.108 *********** 2025-06-22 12:09:52.942458 | orchestrator | 2025-06-22 12:09:52.942467 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 12:09:52.942475 | orchestrator | Sunday 22 June 2025 12:08:30 +0000 (0:00:00.065) 0:01:13.173 *********** 2025-06-22 12:09:52.942484 | orchestrator | 2025-06-22 12:09:52.942493 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-22 12:09:52.942510 | orchestrator | Sunday 22 June 2025 12:08:30 +0000 (0:00:00.059) 0:01:13.233 *********** 2025-06-22 12:09:52.942519 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942528 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:09:52.942537 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:09:52.942545 | orchestrator | 2025-06-22 12:09:52.942554 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-22 12:09:52.942563 | orchestrator | Sunday 22 June 2025 12:08:49 +0000 (0:00:19.198) 0:01:32.432 *********** 2025-06-22 12:09:52.942571 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942580 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:09:52.942589 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:09:52.942598 | orchestrator | 2025-06-22 12:09:52.942606 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-22 12:09:52.942615 | orchestrator | Sunday 22 June 2025 12:08:55 +0000 (0:00:05.295) 0:01:37.728 *********** 2025-06-22 12:09:52.942624 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:09:52.942632 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:09:52.942641 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942650 | orchestrator | 2025-06-22 12:09:52.942658 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 
12:09:52.942667 | orchestrator | Sunday 22 June 2025 12:09:03 +0000 (0:00:07.929) 0:01:45.657 *********** 2025-06-22 12:09:52.942676 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:09:52.942685 | orchestrator | 2025-06-22 12:09:52.942693 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-22 12:09:52.942702 | orchestrator | Sunday 22 June 2025 12:09:03 +0000 (0:00:00.828) 0:01:46.486 *********** 2025-06-22 12:09:52.942711 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:09:52.942720 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:52.942728 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:09:52.942737 | orchestrator | 2025-06-22 12:09:52.942746 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-22 12:09:52.942755 | orchestrator | Sunday 22 June 2025 12:09:04 +0000 (0:00:00.753) 0:01:47.240 *********** 2025-06-22 12:09:52.942763 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:09:52.942772 | orchestrator | 2025-06-22 12:09:52.942790 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-22 12:09:52.942799 | orchestrator | Sunday 22 June 2025 12:09:06 +0000 (0:00:01.770) 0:01:49.011 *********** 2025-06-22 12:09:52.942808 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-22 12:09:52.942816 | orchestrator | 2025-06-22 12:09:52.942825 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-22 12:09:52.942833 | orchestrator | Sunday 22 June 2025 12:09:17 +0000 (0:00:10.949) 0:01:59.960 *********** 2025-06-22 12:09:52.942842 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-22 12:09:52.942850 | orchestrator | 2025-06-22 12:09:52.942859 | orchestrator | TASK [service-ks-register : keystone | Creating 
endpoints] ********************* 2025-06-22 12:09:52.942867 | orchestrator | Sunday 22 June 2025 12:09:38 +0000 (0:00:20.932) 0:02:20.893 *********** 2025-06-22 12:09:52.942876 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-22 12:09:52.942885 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-22 12:09:52.942912 | orchestrator | 2025-06-22 12:09:52.942921 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-22 12:09:52.942930 | orchestrator | Sunday 22 June 2025 12:09:44 +0000 (0:00:06.335) 0:02:27.228 *********** 2025-06-22 12:09:52.942943 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.942952 | orchestrator | 2025-06-22 12:09:52.942961 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-22 12:09:52.942969 | orchestrator | Sunday 22 June 2025 12:09:45 +0000 (0:00:00.351) 0:02:27.580 *********** 2025-06-22 12:09:52.942984 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.942993 | orchestrator | 2025-06-22 12:09:52.943002 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-22 12:09:52.943010 | orchestrator | Sunday 22 June 2025 12:09:45 +0000 (0:00:00.129) 0:02:27.710 *********** 2025-06-22 12:09:52.943019 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.943027 | orchestrator | 2025-06-22 12:09:52.943036 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-22 12:09:52.943045 | orchestrator | Sunday 22 June 2025 12:09:45 +0000 (0:00:00.154) 0:02:27.864 *********** 2025-06-22 12:09:52.943053 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.943062 | orchestrator | 2025-06-22 12:09:52.943070 | orchestrator | TASK [keystone : Creating default user role] 
*********************************** 2025-06-22 12:09:52.943079 | orchestrator | Sunday 22 June 2025 12:09:45 +0000 (0:00:00.316) 0:02:28.181 *********** 2025-06-22 12:09:52.943087 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:09:52.943096 | orchestrator | 2025-06-22 12:09:52.943104 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 12:09:52.943113 | orchestrator | Sunday 22 June 2025 12:09:49 +0000 (0:00:03.940) 0:02:32.121 *********** 2025-06-22 12:09:52.943122 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:09:52.943130 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:09:52.943139 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:09:52.943147 | orchestrator | 2025-06-22 12:09:52.943156 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:09:52.943165 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-22 12:09:52.943174 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-22 12:09:52.943183 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-22 12:09:52.943192 | orchestrator | 2025-06-22 12:09:52.943200 | orchestrator | 2025-06-22 12:09:52.943209 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:09:52.943217 | orchestrator | Sunday 22 June 2025 12:09:50 +0000 (0:00:00.669) 0:02:32.790 *********** 2025-06-22 12:09:52.943226 | orchestrator | =============================================================================== 2025-06-22 12:09:52.943234 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.93s 2025-06-22 12:09:52.943243 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.20s 
2025-06-22 12:09:52.943251 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.35s 2025-06-22 12:09:52.943260 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.95s 2025-06-22 12:09:52.943268 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.52s 2025-06-22 12:09:52.943277 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.92s 2025-06-22 12:09:52.943285 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.93s 2025-06-22 12:09:52.943294 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.34s 2025-06-22 12:09:52.943302 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.30s 2025-06-22 12:09:52.943311 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.10s 2025-06-22 12:09:52.943320 | orchestrator | keystone : Creating default user role ----------------------------------- 3.94s 2025-06-22 12:09:52.943328 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.79s 2025-06-22 12:09:52.943337 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.48s 2025-06-22 12:09:52.943345 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.74s 2025-06-22 12:09:52.943359 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.65s 2025-06-22 12:09:52.943373 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.35s 2025-06-22 12:09:52.943382 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.31s 2025-06-22 12:09:52.943390 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2025-06-22 
12:09:52.943399 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.08s 2025-06-22 12:09:52.943407 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.77s 2025-06-22 12:09:52.943416 | orchestrator | 2025-06-22 12:09:52 | INFO  | Task bf3cd6c9-b452-4840-91eb-f538e5aa32e0 is in state STARTED 2025-06-22 12:09:52.943424 | orchestrator | 2025-06-22 12:09:52 | INFO  | Task 9e8094a8-290d-4468-b47d-d390c90538f0 is in state STARTED 2025-06-22 12:09:52.943433 | orchestrator | 2025-06-22 12:09:52 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:09:52.943441 | orchestrator | 2025-06-22 12:09:52 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED 2025-06-22 12:09:52.943450 | orchestrator | 2025-06-22 12:09:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:09:55.970743 | orchestrator | 2025-06-22 12:09:55 | INFO  | Task ff428187-6934-4848-a511-b5a8b07eb955 is in state STARTED 2025-06-22 12:09:55.970830 | orchestrator | 2025-06-22 12:09:55 | INFO  | Task bf3cd6c9-b452-4840-91eb-f538e5aa32e0 is in state STARTED 2025-06-22 12:09:55.973483 | orchestrator | 2025-06-22 12:09:55 | INFO  | Task 9e8094a8-290d-4468-b47d-d390c90538f0 is in state STARTED 2025-06-22 12:09:55.973826 | orchestrator | 2025-06-22 12:09:55 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:09:55.974423 | orchestrator | 2025-06-22 12:09:55 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED 2025-06-22 12:09:55.974449 | orchestrator | 2025-06-22 12:09:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:09:59.016793 | orchestrator | 2025-06-22 12:09:59 | INFO  | Task ff428187-6934-4848-a511-b5a8b07eb955 is in state STARTED 2025-06-22 12:09:59.017478 | orchestrator | 2025-06-22 12:09:59 | INFO  | Task bf3cd6c9-b452-4840-91eb-f538e5aa32e0 is in state STARTED 2025-06-22 12:09:59.018760 | orchestrator 
| 2025-06-22 12:09:59 | INFO  | Task 9e8094a8-290d-4468-b47d-d390c90538f0 is in state SUCCESS
2025-06-22 12:09:59 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:09:59 | INFO  | Task 4f536c5e-c371-433c-8ead-158f327db7c2 is in state STARTED
2025-06-22 12:09:59 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:09:59 | INFO  | Wait 1 second(s) until the next check
[12:10:02 to 12:10:50: the same status check repeated roughly every 3 seconds; tasks ff428187-6934-4848-a511-b5a8b07eb955, bf3cd6c9-b452-4840-91eb-f538e5aa32e0, 80668b58-ccdf-4015-8452-e52c801c5a0d, 4f536c5e-c371-433c-8ead-158f327db7c2 and 1e27c8cb-60c3-40fb-b9b9-0a000469078a remained in state STARTED]
2025-06-22 12:10:53 | INFO  | Task 8b94429a-97a7-4cb5-9645-055dc74e2db9 is in state STARTED
[12:10:56 to 12:11:05: all six tasks remained in state STARTED]
2025-06-22 12:11:09 | INFO  | Task 8b94429a-97a7-4cb5-9645-055dc74e2db9 is in state SUCCESS
[12:11:12 to 12:11:21: the five remaining tasks stayed in state STARTED]
2025-06-22 12:11:24 | INFO  | Task ff428187-6934-4848-a511-b5a8b07eb955 is in state SUCCESS
[12:11:27 to 12:12:00: tasks bf3cd6c9-b452-4840-91eb-f538e5aa32e0, 80668b58-ccdf-4015-8452-e52c801c5a0d, 4f536c5e-c371-433c-8ead-158f327db7c2 and 1e27c8cb-60c3-40fb-b9b9-0a000469078a remained in state STARTED]
2025-06-22 12:12:03 | INFO  | Task bf3cd6c9-b452-4840-91eb-f538e5aa32e0 is in state SUCCESS
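The status lines above come from a simple poll-and-wait loop: query the state of each outstanding task, drop the ones that reach SUCCESS, sleep, and repeat. A minimal sketch of that pattern, assuming a hypothetical `check_state` callback (this is not the OSISM implementation, just the shape of the loop the log shows):

```python
import time


def wait_for_tasks(task_ids, check_state, interval=1.0):
    """Poll task states until every task has left the STARTED state.

    check_state(task_id) -> str is a hypothetical callback standing in
    for the real task-state lookup (e.g. a Celery AsyncResult query).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = check_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note that with a nominal 1-second sleep the observed check interval is about 3 seconds, since the state lookups themselves take time on top of the sleep.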
PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 22 June 2025  12:09:54 +0000 (0:00:00.290)       0:00:00.290 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Sunday 22 June 2025  12:09:55 +0000 (0:00:00.386)       0:00:00.677 ***********
ok: [testbed-node-0] => (item=enable_keystone_True)
ok: [testbed-node-1] => (item=enable_keystone_True)
ok: [testbed-node-2] => (item=enable_keystone_True)

PLAY [Wait for the Keystone service] *******************************************

TASK [Waiting for Keystone public port to be UP] *******************************
Sunday 22 June 2025  12:09:55 +0000 (0:00:00.780)       0:00:01.457 ***********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Sunday 22 June 2025  12:09:56 +0000 (0:00:00.889)       0:00:02.347 ***********
===============================================================================
Waiting for Keystone public port to be UP ------------------------------- 0.89s
Group hosts based on enabled services ----------------------------------- 0.78s
Group hosts based on Kolla action --------------------------------------- 0.39s

None

PLAY [Bootstrap ceph dashboard] ************************************************

TASK [Disable the ceph dashboard] **********************************************
Sunday 22 June 2025  12:09:53 +0000 (0:00:00.222)       0:00:00.222 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/ssl to false] ******************************************
Sunday 22 June 2025  12:09:55 +0000 (0:00:01.188)       0:00:01.411 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_port to 7000] ***********************************
Sunday 22 June 2025  12:09:56 +0000 (0:00:00.939)       0:00:02.350 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
Sunday 22 June 2025  12:09:56 +0000 (0:00:00.847)       0:00:03.197 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
Sunday 22 June 2025  12:09:57 +0000 (0:00:00.982)       0:00:04.179 ***********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
Sunday 22 June 2025  12:09:58 +0000 (0:00:00.747)       0:00:04.927 ***********
changed: [testbed-manager]

TASK [Enable the ceph dashboard] ***********************************************
Sunday 22 June 2025  12:09:59 +0000 (0:00:00.875)       0:00:05.803 ***********
changed: [testbed-manager]

TASK [Write ceph_dashboard_password to temporary file] *************************
Sunday 22 June 2025  12:10:00 +0000 (0:00:01.240)       0:00:07.044 ***********
changed: [testbed-manager]

TASK [Create admin user] *******************************************************
Sunday 22 June 2025  12:10:01 +0000 (0:00:01.121)       0:00:08.165 ***********
changed: [testbed-manager]

TASK [Remove temporary file for ceph_dashboard_password] ***********************
Sunday 22 June 2025  12:10:56 +0000 (0:00:55.129)       0:01:03.294 ***********
skipping: [testbed-manager]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Sunday 22 June 2025  12:10:57 +0000 (0:00:00.178)       0:01:03.473 ***********
changed: [testbed-node-0]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Sunday 22 June 2025  12:11:08 +0000 (0:00:11.580)       0:01:15.054 ***********
changed: [testbed-node-1]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Sunday 22 June 2025  12:11:09 +0000 (0:00:01.251)       0:01:16.305 ***********
changed: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=9  changed=9  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
testbed-node-0 : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1 : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2 : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Sunday 22 June 2025  12:11:21 +0000 (0:00:11.186)       0:01:27.491 ***********
===============================================================================
Create admin user ------------------------------------------------------ 55.13s
Restart ceph manager service ------------------------------------------- 24.02s 2025-06-22 12:12:03.688588 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.24s 2025-06-22 12:12:03.688599 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.19s 2025-06-22 12:12:03.688610 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.12s 2025-06-22 12:12:03.688620 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.98s 2025-06-22 12:12:03.688631 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.94s 2025-06-22 12:12:03.688642 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.88s 2025-06-22 12:12:03.688652 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.85s 2025-06-22 12:12:03.688663 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.75s 2025-06-22 12:12:03.688674 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-06-22 12:12:03.688684 | orchestrator | 2025-06-22 12:12:03.688695 | orchestrator | 2025-06-22 12:12:03.688706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:12:03.688716 | orchestrator | 2025-06-22 12:12:03.688727 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:12:03.688737 | orchestrator | Sunday 22 June 2025 12:09:55 +0000 (0:00:00.413) 0:00:00.413 *********** 2025-06-22 12:12:03.688748 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:12:03.688759 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:12:03.688769 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:12:03.688780 | orchestrator | 2025-06-22 12:12:03.688791 | orchestrator | TASK [Group hosts 
based on enabled services] *********************************** 2025-06-22 12:12:03.688802 | orchestrator | Sunday 22 June 2025 12:09:56 +0000 (0:00:00.493) 0:00:00.907 *********** 2025-06-22 12:12:03.688812 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-22 12:12:03.688834 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-22 12:12:03.688865 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-22 12:12:03.688876 | orchestrator | 2025-06-22 12:12:03.688888 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-22 12:12:03.688898 | orchestrator | 2025-06-22 12:12:03.688909 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 12:12:03.688920 | orchestrator | Sunday 22 June 2025 12:09:56 +0000 (0:00:00.489) 0:00:01.397 *********** 2025-06-22 12:12:03.688938 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:12:03.688949 | orchestrator | 2025-06-22 12:12:03.688959 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-22 12:12:03.688970 | orchestrator | Sunday 22 June 2025 12:09:57 +0000 (0:00:00.613) 0:00:02.011 *********** 2025-06-22 12:12:03.688981 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-22 12:12:03.688992 | orchestrator | 2025-06-22 12:12:03.689008 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-22 12:12:03.689026 | orchestrator | Sunday 22 June 2025 12:10:01 +0000 (0:00:04.287) 0:00:06.298 *********** 2025-06-22 12:12:03.689038 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-22 12:12:03.689049 | orchestrator | changed: [testbed-node-0] => (item=barbican -> 
https://api.testbed.osism.xyz:9311 -> public) 2025-06-22 12:12:03.689060 | orchestrator | 2025-06-22 12:12:03.689071 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-22 12:12:03.689082 | orchestrator | Sunday 22 June 2025 12:10:08 +0000 (0:00:06.965) 0:00:13.264 *********** 2025-06-22 12:12:03.689092 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-22 12:12:03.689103 | orchestrator | 2025-06-22 12:12:03.689114 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-22 12:12:03.689125 | orchestrator | Sunday 22 June 2025 12:10:11 +0000 (0:00:03.366) 0:00:16.630 *********** 2025-06-22 12:12:03.689136 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:12:03.689146 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-22 12:12:03.689157 | orchestrator | 2025-06-22 12:12:03.689168 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-22 12:12:03.689178 | orchestrator | Sunday 22 June 2025 12:10:15 +0000 (0:00:04.004) 0:00:20.634 *********** 2025-06-22 12:12:03.689189 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 12:12:03.689200 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-22 12:12:03.689211 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-22 12:12:03.689222 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-22 12:12:03.689232 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-22 12:12:03.689243 | orchestrator | 2025-06-22 12:12:03.689254 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-22 12:12:03.689265 | orchestrator | Sunday 22 June 2025 12:10:31 +0000 (0:00:15.836) 0:00:36.471 *********** 2025-06-22 12:12:03.689276 | orchestrator | changed: [testbed-node-0] 
=> (item=barbican -> service -> admin) 2025-06-22 12:12:03.689286 | orchestrator | 2025-06-22 12:12:03.689297 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-22 12:12:03.689308 | orchestrator | Sunday 22 June 2025 12:10:36 +0000 (0:00:04.503) 0:00:40.975 *********** 2025-06-22 12:12:03.689322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.689337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.689369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.689382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689396 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2025-06-22 12:12:03.689438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689484 | orchestrator | 2025-06-22 12:12:03.689505 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-22 12:12:03.689517 | orchestrator | Sunday 22 June 2025 12:10:37 +0000 (0:00:01.606) 0:00:42.581 *********** 2025-06-22 12:12:03.689528 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-22 12:12:03.689539 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-22 12:12:03.689550 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-22 12:12:03.689561 | orchestrator | 2025-06-22 12:12:03.689571 | orchestrator | TASK [barbican : Check if policies shall be 
overwritten] *********************** 2025-06-22 12:12:03.689582 | orchestrator | Sunday 22 June 2025 12:10:38 +0000 (0:00:01.125) 0:00:43.706 *********** 2025-06-22 12:12:03.689593 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:12:03.689604 | orchestrator | 2025-06-22 12:12:03.689615 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-22 12:12:03.689625 | orchestrator | Sunday 22 June 2025 12:10:39 +0000 (0:00:00.240) 0:00:43.947 *********** 2025-06-22 12:12:03.689636 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:12:03.689647 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:12:03.689658 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:12:03.689668 | orchestrator | 2025-06-22 12:12:03.689679 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 12:12:03.689690 | orchestrator | Sunday 22 June 2025 12:10:40 +0000 (0:00:00.799) 0:00:44.746 *********** 2025-06-22 12:12:03.689701 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:12:03.689712 | orchestrator | 2025-06-22 12:12:03.689722 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-22 12:12:03.689733 | orchestrator | Sunday 22 June 2025 12:10:40 +0000 (0:00:00.803) 0:00:45.550 *********** 2025-06-22 12:12:03.689745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.689763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.689785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.689798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.689901 | orchestrator | 2025-06-22 12:12:03.689912 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-22 12:12:03.689934 | orchestrator | Sunday 22 June 2025 12:10:44 +0000 (0:00:03.849) 0:00:49.399 *********** 2025-06-22 12:12:03.689953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.689965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.689983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.689995 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:12:03.690006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.690059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690101 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:12:03.690113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.690131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690154 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:12:03.690165 | orchestrator | 
2025-06-22 12:12:03.690176 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-22 12:12:03.690187 | orchestrator | Sunday 22 June 2025 12:10:45 +0000 (0:00:01.308) 0:00:50.708 *********** 2025-06-22 12:12:03.690199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.690655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690734 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690773 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:12:03.690789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.690801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690825 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:12:03.690893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.690909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.690941 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:12:03.690952 | orchestrator | 2025-06-22 12:12:03.690964 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-22 12:12:03.690976 | orchestrator | Sunday 22 June 2025 12:10:47 +0000 (0:00:01.118) 0:00:51.827 *********** 2025-06-22 12:12:03.690987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.690999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691118 | orchestrator | 2025-06-22 12:12:03.691135 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-22 12:12:03.691147 | orchestrator | Sunday 22 June 2025 12:10:50 +0000 (0:00:03.546) 0:00:55.374 *********** 2025-06-22 12:12:03.691165 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.691176 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:12:03.691189 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:12:03.691202 | orchestrator | 2025-06-22 12:12:03.691214 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-22 12:12:03.691226 | orchestrator | Sunday 22 June 2025 12:10:54 +0000 (0:00:03.680) 0:00:59.054 *********** 2025-06-22 12:12:03.691239 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:12:03.691251 | orchestrator | 2025-06-22 12:12:03.691264 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-22 12:12:03.691276 | orchestrator | Sunday 22 June 2025 12:10:56 +0000 (0:00:02.018) 0:01:01.072 *********** 2025-06-22 12:12:03.691289 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 12:12:03.691301 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:12:03.691313 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:12:03.691326 | orchestrator | 2025-06-22 12:12:03.691339 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-22 12:12:03.691351 | orchestrator | Sunday 22 June 2025 12:10:58 +0000 (0:00:02.075) 0:01:03.147 *********** 2025-06-22 12:12:03.691365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691530 | orchestrator | 2025-06-22 12:12:03.691542 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-22 12:12:03.691560 | orchestrator | Sunday 22 June 2025 12:11:10 +0000 (0:00:11.612) 0:01:14.760 *********** 2025-06-22 12:12:03.691583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.691596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.691608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.691619 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:12:03.691631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.691642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.691660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.691672 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:12:03.691694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 12:12:03.691706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.691718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:12:03.691730 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:12:03.691741 | orchestrator | 2025-06-22 12:12:03.691752 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-22 12:12:03.691763 | orchestrator | Sunday 22 June 2025 12:11:11 +0000 (0:00:01.184) 0:01:15.944 *********** 2025-06-22 12:12:03.691774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 12:12:03.691832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:12:03.691986 | orchestrator | 2025-06-22 12:12:03.691997 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 12:12:03.692008 | orchestrator | Sunday 22 June 2025 12:11:14 +0000 (0:00:03.547) 0:01:19.492 *********** 2025-06-22 12:12:03.692019 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:12:03.692030 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:12:03.692041 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:12:03.692051 | orchestrator | 2025-06-22 12:12:03.692062 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-22 12:12:03.692073 | orchestrator | Sunday 22 June 2025 12:11:15 +0000 (0:00:00.563) 0:01:20.055 *********** 2025-06-22 12:12:03.692084 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.692094 | orchestrator | 2025-06-22 12:12:03.692105 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-22 12:12:03.692117 | orchestrator | Sunday 22 June 2025 12:11:17 +0000 (0:00:02.561) 0:01:22.617 *********** 2025-06-22 12:12:03.692128 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.692138 | orchestrator | 2025-06-22 12:12:03.692149 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-22 12:12:03.692160 | orchestrator | Sunday 22 June 2025 12:11:20 +0000 (0:00:02.647) 0:01:25.264 *********** 2025-06-22 12:12:03.692171 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.692181 | orchestrator | 2025-06-22 12:12:03.692192 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 12:12:03.692203 | orchestrator | Sunday 22 June 2025 12:11:32 +0000 (0:00:12.423) 0:01:37.688 
*********** 2025-06-22 12:12:03.692213 | orchestrator | 2025-06-22 12:12:03.692224 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 12:12:03.692235 | orchestrator | Sunday 22 June 2025 12:11:33 +0000 (0:00:00.052) 0:01:37.740 *********** 2025-06-22 12:12:03.692245 | orchestrator | 2025-06-22 12:12:03.692256 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 12:12:03.692279 | orchestrator | Sunday 22 June 2025 12:11:33 +0000 (0:00:00.054) 0:01:37.795 *********** 2025-06-22 12:12:03.692305 | orchestrator | 2025-06-22 12:12:03.692327 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-22 12:12:03.692343 | orchestrator | Sunday 22 June 2025 12:11:33 +0000 (0:00:00.049) 0:01:37.845 *********** 2025-06-22 12:12:03.692360 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.692377 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:12:03.692394 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:12:03.692411 | orchestrator | 2025-06-22 12:12:03.692427 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-22 12:12:03.692442 | orchestrator | Sunday 22 June 2025 12:11:40 +0000 (0:00:07.431) 0:01:45.276 *********** 2025-06-22 12:12:03.692458 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:12:03.692474 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:12:03.692491 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.692509 | orchestrator | 2025-06-22 12:12:03.692527 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-22 12:12:03.692545 | orchestrator | Sunday 22 June 2025 12:11:48 +0000 (0:00:08.338) 0:01:53.614 *********** 2025-06-22 12:12:03.692562 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:12:03.692579 | orchestrator | changed: 
[testbed-node-2] 2025-06-22 12:12:03.692598 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:12:03.692618 | orchestrator | 2025-06-22 12:12:03.692636 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:12:03.692652 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 12:12:03.692664 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 12:12:03.692675 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 12:12:03.692686 | orchestrator | 2025-06-22 12:12:03.692697 | orchestrator | 2025-06-22 12:12:03.692708 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:12:03.692719 | orchestrator | Sunday 22 June 2025 12:12:00 +0000 (0:00:11.779) 0:02:05.394 *********** 2025-06-22 12:12:03.692729 | orchestrator | =============================================================================== 2025-06-22 12:12:03.692740 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.84s 2025-06-22 12:12:03.692751 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.42s 2025-06-22 12:12:03.692769 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.78s 2025-06-22 12:12:03.692790 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.61s 2025-06-22 12:12:03.692801 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.34s 2025-06-22 12:12:03.692812 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.43s 2025-06-22 12:12:03.692823 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.97s 2025-06-22 
12:12:03.692834 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.50s 2025-06-22 12:12:03.692869 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.29s 2025-06-22 12:12:03.692883 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.00s 2025-06-22 12:12:03.692894 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.85s 2025-06-22 12:12:03.692905 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.68s 2025-06-22 12:12:03.692915 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.55s 2025-06-22 12:12:03.692926 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.55s 2025-06-22 12:12:03.692948 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.37s 2025-06-22 12:12:03.692959 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.64s 2025-06-22 12:12:03.692970 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.57s 2025-06-22 12:12:03.692981 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.08s 2025-06-22 12:12:03.692991 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.02s 2025-06-22 12:12:03.693002 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.61s 2025-06-22 12:12:03.693013 | orchestrator | 2025-06-22 12:12:03 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:12:03.693025 | orchestrator | 2025-06-22 12:12:03 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:12:03.693036 | orchestrator | 2025-06-22 12:12:03 | INFO  | Task 4f536c5e-c371-433c-8ead-158f327db7c2 is in 
state STARTED 2025-06-22 12:12:03.693047 | orchestrator | 2025-06-22 12:12:03 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED 2025-06-22 12:12:03.693058 | orchestrator | 2025-06-22 12:12:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:13:07.676944 | orchestrator | 2025-06-22 12:13:07 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:13:07.677203 | orchestrator | 2025-06-22 12:13:07 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:13:07.686624 | orchestrator | 2025-06-22 12:13:07.686678 | orchestrator | 2025-06-22 12:13:07.686691 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:13:07.686704 |
orchestrator | 2025-06-22 12:13:07.686715 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:13:07.686753 | orchestrator | Sunday 22 June 2025 12:10:01 +0000 (0:00:00.207) 0:00:00.207 *********** 2025-06-22 12:13:07.686960 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:13:07.686984 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:13:07.687001 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:13:07.687020 | orchestrator | 2025-06-22 12:13:07.687040 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:13:07.687060 | orchestrator | Sunday 22 June 2025 12:10:01 +0000 (0:00:00.220) 0:00:00.428 *********** 2025-06-22 12:13:07.687081 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-22 12:13:07.687102 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-22 12:13:07.687121 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-22 12:13:07.687142 | orchestrator | 2025-06-22 12:13:07.687224 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-22 12:13:07.687239 | orchestrator | 2025-06-22 12:13:07.687252 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 12:13:07.687264 | orchestrator | Sunday 22 June 2025 12:10:01 +0000 (0:00:00.297) 0:00:00.725 *********** 2025-06-22 12:13:07.687277 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:13:07.687294 | orchestrator | 2025-06-22 12:13:07.687315 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-22 12:13:07.687334 | orchestrator | Sunday 22 June 2025 12:10:02 +0000 (0:00:00.511) 0:00:01.237 *********** 2025-06-22 12:13:07.687347 | orchestrator | changed: [testbed-node-0] => 
(item=designate (dns)) 2025-06-22 12:13:07.687360 | orchestrator | 2025-06-22 12:13:07.687372 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-22 12:13:07.687384 | orchestrator | Sunday 22 June 2025 12:10:05 +0000 (0:00:03.416) 0:00:04.654 *********** 2025-06-22 12:13:07.687396 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-22 12:13:07.687408 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-22 12:13:07.687421 | orchestrator | 2025-06-22 12:13:07.687433 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-22 12:13:07.687450 | orchestrator | Sunday 22 June 2025 12:10:13 +0000 (0:00:07.313) 0:00:11.967 *********** 2025-06-22 12:13:07.687471 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 12:13:07.687490 | orchestrator | 2025-06-22 12:13:07.687505 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-22 12:13:07.687517 | orchestrator | Sunday 22 June 2025 12:10:16 +0000 (0:00:03.291) 0:00:15.259 *********** 2025-06-22 12:13:07.687529 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:13:07.687541 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-22 12:13:07.687554 | orchestrator | 2025-06-22 12:13:07.687566 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-22 12:13:07.687577 | orchestrator | Sunday 22 June 2025 12:10:20 +0000 (0:00:03.644) 0:00:18.904 *********** 2025-06-22 12:13:07.687588 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 12:13:07.687666 | orchestrator | 2025-06-22 12:13:07.687678 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-22 
12:13:07.687688 | orchestrator | Sunday 22 June 2025 12:10:24 +0000 (0:00:04.183) 0:00:23.088 *********** 2025-06-22 12:13:07.687699 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-22 12:13:07.687710 | orchestrator | 2025-06-22 12:13:07.687737 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-22 12:13:07.687812 | orchestrator | Sunday 22 June 2025 12:10:29 +0000 (0:00:04.915) 0:00:28.003 *********** 2025-06-22 12:13:07.687957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.688024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.688040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.688113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 
12:13:07.688261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688370 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688551 | orchestrator | 2025-06-22 12:13:07.688573 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-22 12:13:07.688591 | orchestrator | Sunday 22 June 2025 12:10:32 +0000 (0:00:03.611) 0:00:31.614 *********** 2025-06-22 12:13:07.688603 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:07.688614 | orchestrator | 2025-06-22 12:13:07.688630 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-22 12:13:07.688642 | orchestrator | Sunday 22 June 2025 12:10:32 +0000 (0:00:00.099) 0:00:31.713 *********** 2025-06-22 12:13:07.688653 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:07.688663 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:07.688674 | orchestrator | skipping: [testbed-node-2] 
2025-06-22 12:13:07.688685 | orchestrator | 2025-06-22 12:13:07.688696 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 12:13:07.688707 | orchestrator | Sunday 22 June 2025 12:10:33 +0000 (0:00:00.282) 0:00:31.996 *********** 2025-06-22 12:13:07.688717 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:13:07.688728 | orchestrator | 2025-06-22 12:13:07.688739 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-22 12:13:07.688750 | orchestrator | Sunday 22 June 2025 12:10:33 +0000 (0:00:00.583) 0:00:32.579 *********** 2025-06-22 12:13:07.688769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.688782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.688794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.688875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.688989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.689152 | orchestrator | 2025-06-22 12:13:07.689163 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-22 12:13:07.689174 | orchestrator | Sunday 22 June 2025 12:10:40 +0000 (0:00:06.650) 0:00:39.230 *********** 2025-06-22 12:13:07.689186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.689204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:13:07.689216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.689247 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.689267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.689286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.689298 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:07.689309 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.690218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:13:07.690264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.690290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.690302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.690332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690352 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:13:07.690370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690478 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:13:07.690488 | orchestrator |
2025-06-22 12:13:07.690498 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-06-22 12:13:07.690508 | orchestrator | Sunday 22 June 2025 12:10:41 +0000 (0:00:01.558) 0:00:40.788 ***********
2025-06-22 12:13:07.690518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690596 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:13:07.690606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690699 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:13:07.690713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes':
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.690793 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:13:07.690805 | orchestrator |
2025-06-22 12:13:07.690817 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-06-22 12:13:07.690854 | orchestrator | Sunday 22 June 2025 12:10:44 +0000 (0:00:02.106) 0:00:42.895 ***********
2025-06-22 12:13:07.690873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group':
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.690945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.690992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-22 12:13:07.691194 | orchestrator |
2025-06-22 12:13:07.691204 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-06-22 12:13:07.691214 | orchestrator | Sunday 22 June 2025 12:10:49 +0000 (0:00:05.856) 0:00:48.752 ***********
2025-06-22 12:13:07.691224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.691246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.691257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-22 12:13:07.691267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.691390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.691416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-22 12:13:07.691433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691591 | orchestrator | 2025-06-22 12:13:07.691601 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-22 12:13:07.691615 | orchestrator | Sunday 22 June 2025 12:11:14 +0000 (0:00:24.460) 0:01:13.212 *********** 2025-06-22 12:13:07.691635 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 12:13:07.691645 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 12:13:07.691654 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 12:13:07.691664 | orchestrator | 2025-06-22 12:13:07.691673 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-22 12:13:07.691683 | orchestrator | Sunday 22 June 2025 12:11:21 +0000 (0:00:06.791) 0:01:20.004 *********** 2025-06-22 12:13:07.691692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 12:13:07.691702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 12:13:07.691711 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 12:13:07.691721 | orchestrator | 2025-06-22 12:13:07.691730 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-22 12:13:07.691740 | orchestrator | Sunday 22 June 2025 12:11:25 +0000 (0:00:04.253) 0:01:24.257 *********** 2025-06-22 12:13:07.691756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.691767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.691777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.691792 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691890 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.691970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.691990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692019 | orchestrator | 2025-06-22 12:13:07.692036 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-22 12:13:07.692052 | orchestrator | Sunday 22 June 2025 
12:11:28 +0000 (0:00:03.413) 0:01:27.671 *********** 2025-06-22 12:13:07.692079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.692096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.692112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.692129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-06-22 12:13:07.692374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692384 | orchestrator | 2025-06-22 12:13:07.692394 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 12:13:07.692403 | orchestrator | Sunday 22 June 2025 12:11:32 +0000 (0:00:03.353) 0:01:31.025 *********** 2025-06-22 12:13:07.692413 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:07.692423 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:07.692432 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:07.692442 | orchestrator | 2025-06-22 12:13:07.692451 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-22 12:13:07.692461 | orchestrator | Sunday 22 June 2025 12:11:32 +0000 (0:00:00.777) 0:01:31.802 *********** 2025-06-22 12:13:07.692477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.692488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:13:07.692503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692557 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:07.692573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.692583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:13:07.692601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 
12:13:07.692619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692667 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
12:13:07.692683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 12:13:07.692693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 12:13:07.692721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 12:13:07.692794 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:07.692812 | orchestrator | 2025-06-22 12:13:07.692898 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-22 12:13:07.692912 | orchestrator | Sunday 22 June 2025 12:11:33 +0000 (0:00:00.959) 0:01:32.762 *********** 2025-06-22 12:13:07.692931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.692950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.692961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 12:13:07.692976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-06-22 12:13:07.692985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.692993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-06-22 12:13:07.693048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693086 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 12:13:07.693131 | orchestrator | 2025-06-22 12:13:07.693139 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 12:13:07.693147 | orchestrator | Sunday 22 June 2025 12:11:39 +0000 (0:00:05.679) 0:01:38.441 *********** 2025-06-22 12:13:07.693155 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:07.693163 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:07.693171 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:07.693179 | orchestrator | 2025-06-22 12:13:07.693186 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-22 12:13:07.693194 | orchestrator | Sunday 22 June 2025 12:11:39 +0000 (0:00:00.353) 0:01:38.795 *********** 2025-06-22 12:13:07.693208 | orchestrator | changed: [testbed-node-0] 
=> (item=designate) 2025-06-22 12:13:07.693216 | orchestrator | 2025-06-22 12:13:07.693224 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-22 12:13:07.693232 | orchestrator | Sunday 22 June 2025 12:11:42 +0000 (0:00:02.669) 0:01:41.465 *********** 2025-06-22 12:13:07.693242 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 12:13:07.693255 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-22 12:13:07.693263 | orchestrator | 2025-06-22 12:13:07.693272 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-22 12:13:07.693283 | orchestrator | Sunday 22 June 2025 12:11:45 +0000 (0:00:02.425) 0:01:43.891 *********** 2025-06-22 12:13:07.693291 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693299 | orchestrator | 2025-06-22 12:13:07.693307 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 12:13:07.693315 | orchestrator | Sunday 22 June 2025 12:11:59 +0000 (0:00:14.635) 0:01:58.527 *********** 2025-06-22 12:13:07.693323 | orchestrator | 2025-06-22 12:13:07.693331 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 12:13:07.693338 | orchestrator | Sunday 22 June 2025 12:11:59 +0000 (0:00:00.145) 0:01:58.672 *********** 2025-06-22 12:13:07.693346 | orchestrator | 2025-06-22 12:13:07.693354 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 12:13:07.693362 | orchestrator | Sunday 22 June 2025 12:11:59 +0000 (0:00:00.166) 0:01:58.839 *********** 2025-06-22 12:13:07.693369 | orchestrator | 2025-06-22 12:13:07.693377 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-22 12:13:07.693385 | orchestrator | Sunday 22 June 2025 12:12:00 +0000 (0:00:00.158) 0:01:58.997 
*********** 2025-06-22 12:13:07.693392 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:07.693400 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693408 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:07.693416 | orchestrator | 2025-06-22 12:13:07.693424 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-22 12:13:07.693432 | orchestrator | Sunday 22 June 2025 12:12:16 +0000 (0:00:16.042) 0:02:15.040 *********** 2025-06-22 12:13:07.693439 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:07.693447 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693455 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:07.693463 | orchestrator | 2025-06-22 12:13:07.693470 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-22 12:13:07.693478 | orchestrator | Sunday 22 June 2025 12:12:27 +0000 (0:00:11.647) 0:02:26.687 *********** 2025-06-22 12:13:07.693486 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693494 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:07.693501 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:07.693509 | orchestrator | 2025-06-22 12:13:07.693517 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-22 12:13:07.693525 | orchestrator | Sunday 22 June 2025 12:12:39 +0000 (0:00:11.746) 0:02:38.434 *********** 2025-06-22 12:13:07.693533 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693540 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:07.693548 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:07.693556 | orchestrator | 2025-06-22 12:13:07.693564 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-22 12:13:07.693572 | orchestrator | Sunday 22 June 2025 12:12:46 +0000 (0:00:06.486) 0:02:44.920 *********** 
2025-06-22 12:13:07.693579 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693587 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:07.693595 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:07.693603 | orchestrator | 2025-06-22 12:13:07.693611 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-22 12:13:07.693618 | orchestrator | Sunday 22 June 2025 12:12:52 +0000 (0:00:06.279) 0:02:51.200 *********** 2025-06-22 12:13:07.693631 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693639 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:07.693647 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:07.693655 | orchestrator | 2025-06-22 12:13:07.693662 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-22 12:13:07.693670 | orchestrator | Sunday 22 June 2025 12:12:58 +0000 (0:00:06.164) 0:02:57.364 *********** 2025-06-22 12:13:07.693678 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:07.693686 | orchestrator | 2025-06-22 12:13:07.693694 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:13:07.693702 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 12:13:07.693715 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 12:13:07.693724 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 12:13:07.693732 | orchestrator | 2025-06-22 12:13:07.693739 | orchestrator | 2025-06-22 12:13:07.693747 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:13:07.693755 | orchestrator | Sunday 22 June 2025 12:13:06 +0000 (0:00:08.038) 0:03:05.402 *********** 2025-06-22 12:13:07.693763 | 
orchestrator | ===============================================================================
2025-06-22 12:13:07.693771 | orchestrator | designate : Copying over designate.conf -------------------------------- 24.46s
2025-06-22 12:13:07.693778 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.04s
2025-06-22 12:13:07.693786 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.64s
2025-06-22 12:13:07.693794 | orchestrator | designate : Restart designate-central container ------------------------ 11.75s
2025-06-22 12:13:07.693802 | orchestrator | designate : Restart designate-api container ---------------------------- 11.65s
2025-06-22 12:13:07.693809 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.04s
2025-06-22 12:13:07.693817 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.31s
2025-06-22 12:13:07.693843 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.79s
2025-06-22 12:13:07.693852 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.65s
2025-06-22 12:13:07.693860 | orchestrator | designate : Restart designate-producer container ------------------------ 6.49s
2025-06-22 12:13:07.693872 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.28s
2025-06-22 12:13:07.693880 | orchestrator | designate : Restart designate-worker container -------------------------- 6.16s
2025-06-22 12:13:07.693888 | orchestrator | designate : Copying over config.json files for services ----------------- 5.86s
2025-06-22 12:13:07.693895 | orchestrator | designate : Check designate containers ---------------------------------- 5.68s
2025-06-22 12:13:07.693903 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.92s
2025-06-22 12:13:07.693911 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.25s
2025-06-22 12:13:07.693919 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.18s
2025-06-22 12:13:07.693927 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.64s
2025-06-22 12:13:07.693935 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.61s
2025-06-22 12:13:07.693942 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.42s
2025-06-22 12:13:07.693950 | orchestrator | 2025-06-22 12:13:07 | INFO  | Task 4f536c5e-c371-433c-8ead-158f327db7c2 is in state SUCCESS
2025-06-22 12:13:07.693959 | orchestrator | 2025-06-22 12:13:07 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:07.693972 | orchestrator | 2025-06-22 12:13:07 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:10.741237 | orchestrator | 2025-06-22 12:13:10 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:10.743037 | orchestrator | 2025-06-22 12:13:10 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:10.744922 | orchestrator | 2025-06-22 12:13:10 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:10.747045 | orchestrator | 2025-06-22 12:13:10 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:10.747081 | orchestrator | 2025-06-22 12:13:10 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:13.810748 | orchestrator | 2025-06-22 12:13:13 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:13.813309 | orchestrator | 2025-06-22 12:13:13 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:13.815600 | orchestrator | 2025-06-22 12:13:13 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:13.817240 | orchestrator | 2025-06-22 12:13:13 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:13.817273 | orchestrator | 2025-06-22 12:13:13 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:16.876441 | orchestrator | 2025-06-22 12:13:16 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:16.878432 | orchestrator | 2025-06-22 12:13:16 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:16.880793 | orchestrator | 2025-06-22 12:13:16 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:16.883402 | orchestrator | 2025-06-22 12:13:16 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:16.883504 | orchestrator | 2025-06-22 12:13:16 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:19.931519 | orchestrator | 2025-06-22 12:13:19 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:19.933622 | orchestrator | 2025-06-22 12:13:19 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:19.936132 | orchestrator | 2025-06-22 12:13:19 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:19.938115 | orchestrator | 2025-06-22 12:13:19 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:19.938343 | orchestrator | 2025-06-22 12:13:19 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:22.985177 | orchestrator | 2025-06-22 12:13:22 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:22.986760 | orchestrator | 2025-06-22 12:13:22 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:22.988308 | orchestrator | 2025-06-22 12:13:22 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:22.989865 | orchestrator | 2025-06-22 12:13:22 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:22.990398 | orchestrator | 2025-06-22 12:13:22 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:26.034925 | orchestrator | 2025-06-22 12:13:26 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:26.035539 | orchestrator | 2025-06-22 12:13:26 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:26.040441 | orchestrator | 2025-06-22 12:13:26 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:26.041984 | orchestrator | 2025-06-22 12:13:26 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state STARTED
2025-06-22 12:13:26.042063 | orchestrator | 2025-06-22 12:13:26 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:13:29.109351 | orchestrator | 2025-06-22 12:13:29 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED
2025-06-22 12:13:29.109725 | orchestrator | 2025-06-22 12:13:29 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:13:29.110506 | orchestrator | 2025-06-22 12:13:29 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:13:29.111150 | orchestrator | 2025-06-22 12:13:29 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:13:29.113503 | orchestrator | 2025-06-22 12:13:29 | INFO  | Task 1e27c8cb-60c3-40fb-b9b9-0a000469078a is in state SUCCESS
2025-06-22 12:13:29.115471 | orchestrator |
2025-06-22 12:13:29.117015 | orchestrator |
2025-06-22 12:13:29.117151 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 12:13:29.117172 | orchestrator |
2025-06-22 12:13:29.117225 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 12:13:29.117237 | orchestrator | Sunday 22 June 2025 12:09:55 +0000 (0:00:00.338) 0:00:00.338 ***********
2025-06-22 12:13:29.117248 | orchestrator | ok: [testbed-manager]
2025-06-22 12:13:29.117260 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:13:29.117270 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:13:29.117281 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:13:29.117292 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:13:29.117302 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:13:29.117312 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:13:29.117323 | orchestrator |
2025-06-22 12:13:29.117334 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 12:13:29.117345 | orchestrator | Sunday 22 June 2025 12:09:56 +0000 (0:00:01.272) 0:00:01.610 ***********
2025-06-22 12:13:29.117356 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117367 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117378 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117388 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117399 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117409 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117420 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-06-22 12:13:29.117434 | orchestrator |
2025-06-22 12:13:29.117453 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-06-22 12:13:29.117489 | orchestrator |
2025-06-22 12:13:29.117510 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-06-22 12:13:29.117525 | orchestrator | Sunday 22 June 2025 12:09:57
+0000 (0:00:00.714) 0:00:02.325 *********** 2025-06-22 12:13:29.117539 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:13:29.117553 | orchestrator | 2025-06-22 12:13:29.117565 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-22 12:13:29.117591 | orchestrator | Sunday 22 June 2025 12:09:58 +0000 (0:00:01.456) 0:00:03.782 *********** 2025-06-22 12:13:29.117608 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:13:29.117644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.117659 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.117692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.117720 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.117737 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.117758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.117786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118219 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 12:13:29.118232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118321 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118443 | orchestrator | 2025-06-22 12:13:29.118454 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-22 12:13:29.118464 | orchestrator | Sunday 22 June 
2025 12:10:02 +0000 (0:00:03.470) 0:00:07.252 *********** 2025-06-22 12:13:29.118478 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:13:29.118488 | orchestrator | 2025-06-22 12:13:29.118498 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-22 12:13:29.118507 | orchestrator | Sunday 22 June 2025 12:10:03 +0000 (0:00:01.310) 0:00:08.562 *********** 2025-06-22 12:13:29.118518 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:13:29.118528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118538 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118624 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.118634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118644 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118837 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118933 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 12:13:29.118953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118964 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.118974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.118984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.119000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-22 12:13:29.119010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:13:29.119043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:13:29.119058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:13:29.119068 | orchestrator |
2025-06-22 12:13:29.119078 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-06-22 12:13:29.119087 | orchestrator | Sunday 22 June 2025 12:10:08 +0000 (0:00:05.408) 0:00:13.971 ***********
2025-06-22 12:13:29.119097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-22 12:13:29.119108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-22 12:13:29.119118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119135 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 12:13:29.119151 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-06-22 12:13:29.119205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119215 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.119295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119413 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.119422 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.119432 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.119442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119478 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.119487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119529 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.119539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119573 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.119583 | orchestrator | 
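Each loop item in the task output above is a kolla-ansible service definition: a key plus a dict carrying container_name, group, enabled, image, volumes, and (for front-end services) haproxy settings. As a hedged sketch of how such a dict-driven loop decides which items to act on and which to skip — the selection function and the backend_tls_enabled flag below are illustrative assumptions, not taken from the actual playbook — the pattern looks roughly like:

```python
# Sketch of a kolla-style service-definition loop, mirroring the dict
# layout seen in the log items above. The selection logic (enabled
# service gated by a TLS flag) is an assumption for illustration only.
services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530",
        "volumes": ["/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": False,  # hypothetical disabled service, for contrast
        "image": "registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530",
        "volumes": [],
        "dimensions": {},
    },
}


def services_to_handle(services, backend_tls_enabled):
    """Return the (key, definition) pairs a task would act on, similar in
    spirit to Ansible's `with_dict` loop gated by a `when:` condition."""
    return [
        (key, svc)
        for key, svc in services.items()
        if svc["enabled"] and backend_tls_enabled
    ]


# With backend TLS disabled every item is skipped, which matches the
# "skipping:" result shown for each loop item in the task output above.
print(services_to_handle(services, backend_tls_enabled=False))  # -> []
```

This is only a model of the loop/skip behavior visible in the log, not the role's actual condition.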
2025-06-22 12:13:29.119593 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-06-22 12:13:29.119602 | orchestrator | Sunday 22 June 2025 12:10:10 +0000 (0:00:01.316) 0:00:15.288 ***********
2025-06-22 12:13:29.119612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-22 12:13:29.119622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:13:29.119638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 12:13:29.119653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value':
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119674 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 12:13:29.119691 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119701 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119711 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 12:13:29.119735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119804 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.119833 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.119844 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.119855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-06-22 12:13:29.119901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 12:13:29.119910 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.119924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119960 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.119970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.119985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.119995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.120005 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.120015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 12:13:29.120029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 12:13:29.120039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 
12:13:29.120053 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.120063 | orchestrator | 2025-06-22 12:13:29.120073 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-22 12:13:29.120082 | orchestrator | Sunday 22 June 2025 12:10:11 +0000 (0:00:01.874) 0:00:17.162 *********** 2025-06-22 12:13:29.120092 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:13:29.120102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.120117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.120128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.120138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.120152 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-06-22 12:13:29.120167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.120177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.120187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120212 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120246 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120282 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 12:13:29.120299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120319 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120383 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.120403 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.120446 | orchestrator | 2025-06-22 12:13:29.120456 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-22 12:13:29.120466 | orchestrator | Sunday 22 June 2025 12:10:18 +0000 (0:00:06.416) 0:00:23.579 *********** 2025-06-22 12:13:29.120475 | orchestrator | ok: [testbed-manager -> localhost] 
2025-06-22 12:13:29.120485 | orchestrator | 2025-06-22 12:13:29.120495 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-22 12:13:29.120505 | orchestrator | Sunday 22 June 2025 12:10:19 +0000 (0:00:00.848) 0:00:24.428 *********** 2025-06-22 12:13:29.120515 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120541 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120552 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120567 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120580 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 
12:13:29.120591 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120601 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120610 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055687, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120625 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120635 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120650 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120663 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 
1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120674 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120684 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.120693 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121084 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121118 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121139 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121156 
| orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121166 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1055670, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9218256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.121176 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121186 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121205 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1055660, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121216 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121231 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121245 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1055660, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121295 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1055637, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9178255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121317 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1055660, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121327 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1055660, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121343 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1055660, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121360 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121370 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1055660, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121393 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1055637, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9178255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1055637, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9178255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121414 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1055637, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9178255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121424 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1055637, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9178255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121439 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1055629, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.121466 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1055637, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1750591775.9178255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121477 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1055655, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9198256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121491 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1055655, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9198256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121501 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1055655, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9198256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121544 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1055655, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9198256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121554 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1055655, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9198256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121564 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1055655, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9198256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121586 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1055672, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9228256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121596 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1055672, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9228256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121610 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1055672, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9228256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121620 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 13522, 'inode': 1055672, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9228256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121648 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1055631, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9148254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.121659 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1055684, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9238257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 12:13:29.121669 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1055672, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9228256, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})

All loop items below are regular files owned by root:root (uid 0, gid 0), mode 0644 (rw-r--r--), dev 91, nlink 1, atime/mtime 1748870577.0, with no execute/setuid/setgid bits. Per-file size, inode, and ctime:

    /operations/prometheus/alertmanager.rec.rules       3  1055626  1750591775.9138255
    /operations/prometheus/ceph.rec.rules               3  1055633  1750591775.9158254
    /operations/prometheus/ceph.rules               55956  1055637  1750591775.9178255
    /operations/prometheus/elasticsearch.rules       5987  1055648  1750591775.9188256
    /operations/prometheus/fluentd-aggregator.rules   996  1055651  1750591775.9188256
    /operations/prometheus/haproxy.rules             7933  1055655  1750591775.9198256
    /operations/prometheus/hardware.rules            5593  1055660  1750591775.9208255
    /operations/prometheus/mysql.rules               3792  1055667  1750591775.9208255
    /operations/prometheus/node.rules               13522  1055672  1750591775.9228256
    /operations/prometheus/openstack.rules          12293  1055679  1750591775.9228256
    /operations/prometheus/prometheus-extra.rules    7408  1055684  1750591775.9238257
    /operations/prometheus/prometheus.rules         12980  1055689  1750591775.9248257
    /operations/prometheus/rabbitmq.rules            3539  1055709  1750591775.9278257
    /operations/prometheus/redfish.rules              334  1055711  1750591775.9278257

2025-06-22 12:13:29.121698 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2025-06-22 12:13:29.121710 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-22 12:13:29.121727 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.121744 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-22 12:13:29.121762 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-22 12:13:29.121780 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-22 12:13:29.121804 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-22 12:13:29.121852 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.121866 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-06-22 12:13:29.121877 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.121893 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.121904 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.121915 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.121933 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.121951 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.121962 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.121973 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-22 12:13:29.121986 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.121996 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-22 12:13:29.122006 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.122051 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-22 12:13:29.122070 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.122081 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-06-22 12:13:29.122091 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-22 12:13:29.122105 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-22 12:13:29.122115 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-22 12:13:29.122131 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-22 12:13:29.122141 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-06-22 12:13:29.122547 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-22 12:13:29.122564 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-22 12:13:29.122574 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-22 12:13:29.122589 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-22 12:13:29.122599 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-22 12:13:29.122616 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-06-22 12:13:29.122627 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-22 12:13:29.122642 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-22 12:13:29.122653 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-22 12:13:29.122663 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2025-06-22 12:13:29.122677 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-06-22 12:13:29.122687 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-06-22 12:13:29.122702 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-06-22 12:13:29.122712 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-06-22 12:13:29.122728 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-22 12:13:29.122738 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-06-22 12:13:29.122748 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-06-22 12:13:29.122762 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-22 12:13:29.122773 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-22 12:13:29.122788 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-06-22 12:13:29.122798 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-06-22 12:13:29.122839 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-22 12:13:29.122852 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-22 12:13:29.122861 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-06-22 12:13:29.122879 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-06-22 12:13:29.122895 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
2025-06-22 12:13:29.122905 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:13:29.122916 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2025-06-22 12:13:29.122926 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:13:29.122935 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-06-22 12:13:29.122951 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-06-22 12:13:29.122961 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-06-22 12:13:29.122971 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2025-06-22 12:13:29.122980 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:13:29.122995 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-06-22 12:13:29.123010 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-06-22 12:13:29.123020 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2025-06-22 12:13:29.123029 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:13:29.123039 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2025-06-22 12:13:29.123049 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:13:29.123064 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
2025-06-22 12:13:29.123074 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:13:29.123084 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules)
2025-06-22 12:13:29.123094 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2025-06-22 12:13:29.123113 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-06-22 12:13:29.123129 | orchestrator | changed: [testbed-manager] => (item={'path':
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055633, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9158254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123145 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1055651, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9188256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123162 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1055626, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9138255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123199 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1055667, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9208255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123216 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1055709, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9278257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123233 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1055648, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9188256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123268 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1055689, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1750591775.9248257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 12:13:29.123284 | orchestrator | 2025-06-22 12:13:29.123300 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-22 12:13:29.123317 | orchestrator | Sunday 22 June 2025 12:10:43 +0000 (0:00:23.930) 0:00:48.358 *********** 2025-06-22 12:13:29.123334 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 12:13:29.123350 | orchestrator | 2025-06-22 12:13:29.123365 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-22 12:13:29.123381 | orchestrator | Sunday 22 June 2025 12:10:43 +0000 (0:00:00.774) 0:00:49.133 *********** 2025-06-22 12:13:29.123397 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.123413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123429 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123459 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123473 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 12:13:29.123488 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.123504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123520 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123538 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123555 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123574 | orchestrator | [WARNING]: Skipped 
2025-06-22 12:13:29.123591 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123611 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123649 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123662 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.123673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123684 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123705 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123716 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.123727 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123738 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123758 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123769 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123780 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.123790 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123812 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123862 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123873 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123883 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.123894 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123905 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-22 12:13:29.123916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 12:13:29.123927 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-22 12:13:29.123938 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:13:29.123949 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 12:13:29.123960 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 12:13:29.123970 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 12:13:29.123981 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 12:13:29.123992 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 12:13:29.124003 | orchestrator | 2025-06-22 12:13:29.124014 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-22 12:13:29.124025 | orchestrator | Sunday 22 June 2025 12:10:46 +0000 (0:00:02.156) 0:00:51.289 *********** 2025-06-22 12:13:29.124035 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 12:13:29.124047 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 12:13:29.124057 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 12:13:29.124069 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 12:13:29.124079 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.124097 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.124108 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.124119 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.124130 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 12:13:29.124141 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.124151 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 12:13:29.124162 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.124173 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-22 12:13:29.124184 | orchestrator | 2025-06-22 12:13:29.124195 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-22 12:13:29.124206 | orchestrator | Sunday 22 June 2025 12:11:15 +0000 (0:00:29.630) 0:01:20.920 *********** 2025-06-22 12:13:29.124217 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 12:13:29.124228 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.124238 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 12:13:29.124250 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.124261 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 12:13:29.124272 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.124282 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 12:13:29.124293 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.124304 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 12:13:29.124315 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.124333 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 
 2025-06-22 12:13:29.124344 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.124355 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-22 12:13:29.124366 | orchestrator | 2025-06-22 12:13:29.124377 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-22 12:13:29.124388 | orchestrator | Sunday 22 June 2025 12:11:20 +0000 (0:00:05.110) 0:01:26.030 *********** 2025-06-22 12:13:29.124399 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 12:13:29.124411 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.124422 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-22 12:13:29.124433 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 12:13:29.124444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.124455 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 12:13:29.124473 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.124484 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 12:13:29.124496 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.124507 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 12:13:29.124519 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.124531 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 12:13:29.124542 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.124554 | orchestrator | 2025-06-22 12:13:29.124566 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-22 12:13:29.124577 | orchestrator | Sunday 22 June 2025 12:11:23 +0000 (0:00:02.784) 0:01:28.815 *********** 2025-06-22 12:13:29.124589 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 12:13:29.124600 | orchestrator | 2025-06-22 12:13:29.124612 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-22 12:13:29.124623 | orchestrator | Sunday 22 June 2025 12:11:24 +0000 (0:00:01.081) 0:01:29.897 *********** 2025-06-22 12:13:29.124635 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.124646 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.124658 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.124669 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.124681 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.124692 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.124703 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.124715 | orchestrator | 2025-06-22 12:13:29.124726 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-22 12:13:29.124738 | orchestrator | Sunday 22 June 2025 12:11:25 +0000 (0:00:00.749) 0:01:30.647 *********** 2025-06-22 12:13:29.124749 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.124761 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.124772 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.124784 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.124796 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:29.124807 | 
orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:29.124865 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:29.124878 | orchestrator | 2025-06-22 12:13:29.124893 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-22 12:13:29.124912 | orchestrator | Sunday 22 June 2025 12:11:28 +0000 (0:00:02.999) 0:01:33.646 *********** 2025-06-22 12:13:29.124923 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.124934 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.124945 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.124955 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.124966 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.124977 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.125007 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.125018 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.125029 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.125040 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.125050 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.125061 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 12:13:29.125072 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.125082 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.125093 | orchestrator | 2025-06-22 12:13:29.125103 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-22 
12:13:29.125114 | orchestrator | Sunday 22 June 2025 12:11:29 +0000 (0:00:01.475) 0:01:35.121 *********** 2025-06-22 12:13:29.125125 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 12:13:29.125135 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.125146 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 12:13:29.125157 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.125168 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-22 12:13:29.125178 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 12:13:29.125189 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.125200 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 12:13:29.125210 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.125221 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 12:13:29.125231 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.125242 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 12:13:29.125253 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.125264 | orchestrator | 2025-06-22 12:13:29.125274 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-22 12:13:29.125292 | orchestrator | Sunday 22 June 2025 12:11:32 +0000 (0:00:02.099) 0:01:37.220 *********** 2025-06-22 12:13:29.125303 | orchestrator | [WARNING]: Skipped 2025-06-22 12:13:29.125313 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-22 12:13:29.125324 | orchestrator | due to this access issue: 2025-06-22 12:13:29.125335 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-22 12:13:29.125345 | orchestrator | not a directory 2025-06-22 12:13:29.125356 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 12:13:29.125367 | orchestrator | 2025-06-22 12:13:29.125377 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-22 12:13:29.125392 | orchestrator | Sunday 22 June 2025 12:11:33 +0000 (0:00:01.151) 0:01:38.372 *********** 2025-06-22 12:13:29.125401 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.125411 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.125420 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.125430 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.125439 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.125448 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.125458 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:13:29.125467 | orchestrator | 2025-06-22 12:13:29.125477 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-22 12:13:29.125486 | orchestrator | Sunday 22 June 2025 12:11:34 +0000 (0:00:01.306) 0:01:39.678 *********** 2025-06-22 12:13:29.125495 | orchestrator | skipping: [testbed-manager] 2025-06-22 12:13:29.125505 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:13:29.125514 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:13:29.125523 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:13:29.125536 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:13:29.125553 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:13:29.125563 | orchestrator | skipping: [testbed-node-5] 2025-06-22 
12:13:29.125572 | orchestrator | 2025-06-22 12:13:29.125582 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-22 12:13:29.125591 | orchestrator | Sunday 22 June 2025 12:11:36 +0000 (0:00:01.603) 0:01:41.282 *********** 2025-06-22 12:13:29.125606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 12:13:29.125618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125638 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125685 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 12:13:29.125739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125762 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 12:13:29.125774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.125920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 12:13:29.125996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.126006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.126058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 12:13:29.126071 | orchestrator | 2025-06-22 12:13:29.126081 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-22 12:13:29.126091 | orchestrator | Sunday 22 June 2025 12:11:40 +0000 (0:00:04.351) 0:01:45.633 *********** 2025-06-22 12:13:29.126101 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 12:13:29.126110 | orchestrator | skipping: 
[testbed-manager] 2025-06-22 12:13:29.126120 | orchestrator | 2025-06-22 12:13:29.126129 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126139 | orchestrator | Sunday 22 June 2025 12:11:42 +0000 (0:00:01.795) 0:01:47.429 *********** 2025-06-22 12:13:29.126148 | orchestrator | 2025-06-22 12:13:29.126162 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126172 | orchestrator | Sunday 22 June 2025 12:11:42 +0000 (0:00:00.503) 0:01:47.932 *********** 2025-06-22 12:13:29.126181 | orchestrator | 2025-06-22 12:13:29.126191 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126200 | orchestrator | Sunday 22 June 2025 12:11:42 +0000 (0:00:00.129) 0:01:48.062 *********** 2025-06-22 12:13:29.126209 | orchestrator | 2025-06-22 12:13:29.126219 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126228 | orchestrator | Sunday 22 June 2025 12:11:43 +0000 (0:00:00.142) 0:01:48.205 *********** 2025-06-22 12:13:29.126238 | orchestrator | 2025-06-22 12:13:29.126247 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126257 | orchestrator | Sunday 22 June 2025 12:11:43 +0000 (0:00:00.137) 0:01:48.342 *********** 2025-06-22 12:13:29.126266 | orchestrator | 2025-06-22 12:13:29.126276 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126285 | orchestrator | Sunday 22 June 2025 12:11:43 +0000 (0:00:00.099) 0:01:48.442 *********** 2025-06-22 12:13:29.126294 | orchestrator | 2025-06-22 12:13:29.126310 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 12:13:29.126319 | orchestrator | Sunday 22 June 2025 12:11:43 +0000 (0:00:00.112) 
0:01:48.555 *********** 2025-06-22 12:13:29.126329 | orchestrator | 2025-06-22 12:13:29.126338 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-22 12:13:29.126348 | orchestrator | Sunday 22 June 2025 12:11:43 +0000 (0:00:00.180) 0:01:48.736 *********** 2025-06-22 12:13:29.126357 | orchestrator | changed: [testbed-manager] 2025-06-22 12:13:29.126367 | orchestrator | 2025-06-22 12:13:29.126376 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-22 12:13:29.126386 | orchestrator | Sunday 22 June 2025 12:11:58 +0000 (0:00:15.194) 0:02:03.930 *********** 2025-06-22 12:13:29.126395 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:29.126405 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:29.126414 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:29.126424 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:13:29.126433 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:13:29.126443 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:13:29.126452 | orchestrator | changed: [testbed-manager] 2025-06-22 12:13:29.126461 | orchestrator | 2025-06-22 12:13:29.126471 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-22 12:13:29.126480 | orchestrator | Sunday 22 June 2025 12:12:15 +0000 (0:00:16.598) 0:02:20.529 *********** 2025-06-22 12:13:29.126490 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:29.126499 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:29.126509 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:29.126518 | orchestrator | 2025-06-22 12:13:29.126528 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-22 12:13:29.126537 | orchestrator | Sunday 22 June 2025 12:12:21 +0000 (0:00:06.507) 0:02:27.037 *********** 2025-06-22 12:13:29.126547 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 12:13:29.126556 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:29.126566 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:29.126575 | orchestrator | 2025-06-22 12:13:29.126585 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-22 12:13:29.126594 | orchestrator | Sunday 22 June 2025 12:12:31 +0000 (0:00:09.992) 0:02:37.029 *********** 2025-06-22 12:13:29.126604 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:29.126613 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:13:29.126623 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:29.126632 | orchestrator | changed: [testbed-manager] 2025-06-22 12:13:29.126642 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:13:29.126657 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:29.126667 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:13:29.126677 | orchestrator | 2025-06-22 12:13:29.126686 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-22 12:13:29.126696 | orchestrator | Sunday 22 June 2025 12:12:52 +0000 (0:00:20.857) 0:02:57.886 *********** 2025-06-22 12:13:29.126705 | orchestrator | changed: [testbed-manager] 2025-06-22 12:13:29.126715 | orchestrator | 2025-06-22 12:13:29.126724 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-22 12:13:29.126734 | orchestrator | Sunday 22 June 2025 12:13:01 +0000 (0:00:08.818) 0:03:06.705 *********** 2025-06-22 12:13:29.126743 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:13:29.126753 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:13:29.126762 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:13:29.126772 | orchestrator | 2025-06-22 12:13:29.126781 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-22 12:13:29.126791 | 
orchestrator | Sunday 22 June 2025 12:13:11 +0000 (0:00:10.339) 0:03:17.045 *********** 2025-06-22 12:13:29.126800 | orchestrator | changed: [testbed-manager] 2025-06-22 12:13:29.126810 | orchestrator | 2025-06-22 12:13:29.126838 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-22 12:13:29.126855 | orchestrator | Sunday 22 June 2025 12:13:17 +0000 (0:00:05.421) 0:03:22.466 *********** 2025-06-22 12:13:29.126865 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:13:29.126874 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:13:29.126884 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:13:29.126893 | orchestrator | 2025-06-22 12:13:29.126903 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:13:29.126913 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 12:13:29.126923 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 12:13:29.126933 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 12:13:29.126947 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 12:13:29.126957 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 12:13:29.126966 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 12:13:29.126976 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 12:13:29.126986 | orchestrator | 2025-06-22 12:13:29.126995 | orchestrator | 2025-06-22 12:13:29.127005 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 
12:13:29.127014 | orchestrator | Sunday 22 June 2025 12:13:27 +0000 (0:00:10.078) 0:03:32.545 *********** 2025-06-22 12:13:29.127024 | orchestrator | =============================================================================== 2025-06-22 12:13:29.127033 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 29.63s 2025-06-22 12:13:29.127043 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.93s 2025-06-22 12:13:29.127052 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.86s 2025-06-22 12:13:29.127062 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.60s 2025-06-22 12:13:29.127071 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.19s 2025-06-22 12:13:29.127080 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.34s 2025-06-22 12:13:29.127090 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.08s 2025-06-22 12:13:29.127099 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.99s 2025-06-22 12:13:29.127109 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.82s 2025-06-22 12:13:29.127120 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.51s 2025-06-22 12:13:29.127137 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.42s 2025-06-22 12:13:29.127153 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.42s 2025-06-22 12:13:29.127169 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.41s 2025-06-22 12:13:29.127184 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.11s 2025-06-22 12:13:29.127200 | 
orchestrator | prometheus : Check prometheus containers -------------------------------- 4.36s 2025-06-22 12:13:29.127218 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.47s 2025-06-22 12:13:29.127235 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.00s 2025-06-22 12:13:29.127252 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.78s 2025-06-22 12:13:29.127269 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.16s 2025-06-22 12:13:29.127279 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.10s 2025-06-22 12:13:29.127295 | orchestrator | 2025-06-22 12:13:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:13:32.168271 | orchestrator | 2025-06-22 12:13:32 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED 2025-06-22 12:13:32.171494 | orchestrator | 2025-06-22 12:13:32 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:13:32.171895 | orchestrator | 2025-06-22 12:13:32 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:13:32.174097 | orchestrator | 2025-06-22 12:13:32 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:13:32.174133 | orchestrator | 2025-06-22 12:13:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:13:35.217786 | orchestrator | 2025-06-22 12:13:35 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED 2025-06-22 12:13:35.219399 | orchestrator | 2025-06-22 12:13:35 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:13:35.221089 | orchestrator | 2025-06-22 12:13:35 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:13:35.222991 | orchestrator | 2025-06-22 12:13:35 | INFO  | Task 
3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:11.892364 | orchestrator | 2025-06-22 12:14:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:14.940342 | orchestrator | 2025-06-22 12:14:14 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED 2025-06-22 12:14:14.941444 | orchestrator | 2025-06-22 12:14:14 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:14.945238 | orchestrator | 2025-06-22 12:14:14 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:14.947409 | orchestrator | 2025-06-22 12:14:14 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:14.947436 | orchestrator | 2025-06-22 12:14:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:17.994577 | orchestrator | 2025-06-22 12:14:17 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state STARTED 2025-06-22 12:14:17.995364 | orchestrator | 2025-06-22 12:14:17 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:17.996495 | orchestrator | 2025-06-22 12:14:17 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:17.997864 | orchestrator | 2025-06-22 12:14:17 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:17.997895 | orchestrator | 2025-06-22 12:14:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:21.059401 | orchestrator | 2025-06-22 12:14:21.059495 | orchestrator | 2025-06-22 12:14:21.059506 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:14:21.059513 | orchestrator | 2025-06-22 12:14:21.059519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:14:21.059526 | orchestrator | Sunday 22 June 2025 12:13:11 +0000 (0:00:00.266) 0:00:00.266 *********** 2025-06-22 
12:14:21.059554 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:14:21.059562 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:14:21.059567 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:14:21.059573 | orchestrator |
2025-06-22 12:14:21.059580 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 12:14:21.059587 | orchestrator | Sunday 22 June 2025 12:13:11 +0000 (0:00:00.304) 0:00:00.571 ***********
2025-06-22 12:14:21.059593 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-06-22 12:14:21.059600 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-06-22 12:14:21.059607 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-06-22 12:14:21.059613 | orchestrator |
2025-06-22 12:14:21.059618 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-06-22 12:14:21.059624 | orchestrator |
2025-06-22 12:14:21.059630 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-22 12:14:21.059636 | orchestrator | Sunday 22 June 2025 12:13:11 +0000 (0:00:00.483) 0:00:01.054 ***********
2025-06-22 12:14:21.059642 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:14:21.059649 | orchestrator |
2025-06-22 12:14:21.059655 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-06-22 12:14:21.059662 | orchestrator | Sunday 22 June 2025 12:13:12 +0000 (0:00:00.546) 0:00:01.600 ***********
2025-06-22 12:14:21.059668 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-06-22 12:14:21.059674 | orchestrator |
2025-06-22 12:14:21.059680 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-06-22 12:14:21.059687 | orchestrator | Sunday 22 June 2025 12:13:15 +0000 (0:00:03.489) 0:00:05.090 ***********
2025-06-22 12:14:21.059693 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-06-22 12:14:21.059699 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-06-22 12:14:21.059704 | orchestrator |
2025-06-22 12:14:21.059710 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-06-22 12:14:21.059716 | orchestrator | Sunday 22 June 2025 12:13:22 +0000 (0:00:06.618) 0:00:11.709 ***********
2025-06-22 12:14:21.059722 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-22 12:14:21.059729 | orchestrator |
2025-06-22 12:14:21.059735 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-06-22 12:14:21.059742 | orchestrator | Sunday 22 June 2025 12:13:25 +0000 (0:00:03.270) 0:00:14.979 ***********
2025-06-22 12:14:21.059748 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-22 12:14:21.059754 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-06-22 12:14:21.059760 | orchestrator |
2025-06-22 12:14:21.059766 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-06-22 12:14:21.059772 | orchestrator | Sunday 22 June 2025 12:13:29 +0000 (0:00:04.012) 0:00:18.992 ***********
2025-06-22 12:14:21.059778 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-22 12:14:21.059784 | orchestrator |
2025-06-22 12:14:21.059819 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-06-22 12:14:21.059826 | orchestrator | Sunday 22 June 2025 12:13:33 +0000 (0:00:03.176) 0:00:22.169 ***********
2025-06-22 12:14:21.059832 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-06-22 12:14:21.059838 | orchestrator |
2025-06-22 12:14:21.059845 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-22 12:14:21.059851 | orchestrator | Sunday 22 June 2025 12:13:37 +0000 (0:00:04.074) 0:00:26.243 ***********
2025-06-22 12:14:21.059857 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:21.059863 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:21.059869 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:21.059875 | orchestrator |
2025-06-22 12:14:21.059890 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-06-22 12:14:21.059896 | orchestrator | Sunday 22 June 2025 12:13:37 +0000 (0:00:00.250) 0:00:26.494 ***********
2025-06-22 12:14:21.059905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.059932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.059940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.059947 | orchestrator |
2025-06-22 12:14:21.059954 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-06-22 12:14:21.059961 | orchestrator | Sunday 22 June 2025 12:13:38 +0000 (0:00:00.782) 0:00:27.276 ***********
2025-06-22 12:14:21.059968 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:21.059975 | orchestrator |
2025-06-22 12:14:21.059981 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-06-22 12:14:21.059988 | orchestrator | Sunday 22 June 2025 12:13:38 +0000 (0:00:00.114) 0:00:27.390 ***********
2025-06-22 12:14:21.059995 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:21.060002 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:21.060009 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:21.060016 | orchestrator |
2025-06-22 12:14:21.060023 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-22 12:14:21.060030 | orchestrator | Sunday 22 June 2025 12:13:38 +0000 (0:00:00.436) 0:00:27.827 ***********
2025-06-22 12:14:21.060036 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:14:21.060050 | orchestrator |
2025-06-22 12:14:21.060056 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-06-22 12:14:21.060063 | orchestrator | Sunday 22 June 2025 12:13:39 +0000 (0:00:00.541) 0:00:28.368 ***********
2025-06-22 12:14:21.060069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060113 | orchestrator |
2025-06-22 12:14:21.060119 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-06-22 12:14:21.060124 | orchestrator | Sunday 22 June 2025 12:13:40 +0000 (0:00:01.462) 0:00:29.831 ***********
2025-06-22 12:14:21.060130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060142 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:21.060148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060153 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:21.060165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060171 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:21.060176 | orchestrator |
2025-06-22 12:14:21.060182 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-06-22 12:14:21.060188 | orchestrator | Sunday 22 June 2025 12:13:41 +0000 (0:00:00.705) 0:00:30.536 ***********
2025-06-22 12:14:21.060198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060204 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:21.060210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060221 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:21.060227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060232 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:21.060237 | orchestrator |
2025-06-22 12:14:21.060243 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-06-22 12:14:21.060249 | orchestrator | Sunday 22 June 2025 12:13:42 +0000 (0:00:00.665) 0:00:31.202 ***********
2025-06-22 12:14:21.060262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060288 | orchestrator |
2025-06-22 12:14:21.060294 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-06-22 12:14:21.060300 | orchestrator | Sunday 22 June 2025 12:13:43 +0000 (0:00:01.429) 0:00:32.631 ***********
2025-06-22 12:14:21.060305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060331 | orchestrator |
2025-06-22 12:14:21.060337 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-06-22 12:14:21.060343 | orchestrator | Sunday 22 June 2025 12:13:46 +0000 (0:00:02.530) 0:00:35.162 ***********
2025-06-22 12:14:21.060348 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-22 12:14:21.060355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-22 12:14:21.060364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-22 12:14:21.060370 | orchestrator |
2025-06-22 12:14:21.060376 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-06-22 12:14:21.060382 | orchestrator | Sunday 22 June 2025 12:13:47 +0000 (0:00:01.434) 0:00:36.596 ***********
2025-06-22 12:14:21.060387 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:21.060393 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:14:21.060398 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:14:21.060404 | orchestrator |
2025-06-22 12:14:21.060409 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-06-22 12:14:21.060414 | orchestrator | Sunday 22 June 2025 12:13:48 +0000 (0:00:01.371) 0:00:37.968 ***********
2025-06-22 12:14:21.060421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060427 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:21.060432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060438 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:21.060458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060465 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:21.060470 | orchestrator |
2025-06-22 12:14:21.060476 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-06-22 12:14:21.060489 | orchestrator | Sunday 22 June 2025 12:13:49 +0000 (0:00:00.481) 0:00:38.449 ***********
2025-06-22 12:14:21.060495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-22 12:14:21.060513 | orchestrator |
2025-06-22 12:14:21.060519 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-06-22 12:14:21.060525 | orchestrator | Sunday 22 June 2025 12:13:50 +0000 (0:00:01.366) 0:00:39.815 ***********
2025-06-22 12:14:21.060530 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:21.060535 | orchestrator |
2025-06-22 12:14:21.060541 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-06-22 12:14:21.060546 | orchestrator | Sunday 22 June 2025 12:13:52 +0000 (0:00:02.166) 0:00:41.982 ***********
2025-06-22 12:14:21.060552 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:21.060557 | orchestrator |
2025-06-22 12:14:21.060563 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-06-22 12:14:21.060568 | orchestrator | Sunday 22 June 2025 12:13:55 +0000 (0:00:02.363) 0:00:44.346 ***********
2025-06-22 12:14:21.060577 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:21.060583 | orchestrator |
2025-06-22 12:14:21.060591 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-22 12:14:21.060599 | orchestrator | Sunday 22 June 2025 12:14:08 +0000 (0:00:13.389) 0:00:57.735 ***********
2025-06-22 12:14:21.060613 | orchestrator |
2025-06-22 12:14:21.060621 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-22 12:14:21.060630 | orchestrator | Sunday 22 June 2025 12:14:08 +0000 (0:00:00.058) 0:00:57.794 ***********
2025-06-22 12:14:21.060638 | orchestrator |
2025-06-22 12:14:21.060645 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-22 12:14:21.060654 | orchestrator | Sunday 22 June 2025 12:14:08 +0000 (0:00:00.061) 0:00:57.855 ***********
2025-06-22 12:14:21.060663 | orchestrator |
2025-06-22 12:14:21.060675 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-22 12:14:21.060684 | orchestrator | Sunday 22 June 2025 12:14:08 +0000 (0:00:00.061) 0:00:57.917 ***********
2025-06-22 12:14:21.060692 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:21.060699 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:14:21.060707 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:14:21.060715 | orchestrator |
2025-06-22 12:14:21.060724 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:14:21.060733 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 12:14:21.060743 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 12:14:21.060751 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 12:14:21.060758 | orchestrator |
2025-06-22 12:14:21.060766 | orchestrator |
2025-06-22 12:14:21.060775 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:14:21.060783 | orchestrator | Sunday 22 June 2025 12:14:19 +0000 (0:00:10.248) 0:01:08.166 ***********
2025-06-22 12:14:21.060815 | orchestrator | ===============================================================================
2025-06-22 12:14:21.060822 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.39s
2025-06-22 12:14:21.060828 | orchestrator | placement : Restart placement-api container ---------------------------- 10.25s
2025-06-22 12:14:21.060836 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.62s
2025-06-22 12:14:21.060844 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.07s
2025-06-22 12:14:21.060853 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.01s
2025-06-22 12:14:21.060860 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.49s
2025-06-22 12:14:21.060869 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.27s
2025-06-22 12:14:21.060877 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.18s
2025-06-22 12:14:21.060885 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.53s
2025-06-22 12:14:21.060893 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s
2025-06-22 12:14:21.060902 | orchestrator | placement : Creating placement databases -------------------------------- 2.17s
2025-06-22 12:14:21.060910 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.46s
2025-06-22 12:14:21.060918 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s
2025-06-22 12:14:21.060926 | orchestrator | placement : Copying over config.json files for services ----------------- 1.43s
2025-06-22 12:14:21.060934 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.37s
2025-06-22 12:14:21.060942 | orchestrator | placement : Check placement containers ---------------------------------- 1.37s
2025-06-22 12:14:21.060951 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.78s
2025-06-22 12:14:21.060959 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.71s
2025-06-22 12:14:21.060975 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.67s
2025-06-22 12:14:21.060983 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s
2025-06-22 12:14:21.060991 | orchestrator | 2025-06-22 12:14:21 | INFO  | Task ed4d496c-1244-4987-bb6b-e4a064b2e9a4 is in state SUCCESS
2025-06-22 12:14:21.060999 | orchestrator | 2025-06-22 12:14:21 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:14:21.061588 | orchestrator | 2025-06-22 12:14:21 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:14:21.062710 | orchestrator | 2025-06-22 12:14:21 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED
2025-06-22 12:14:21.063876 | orchestrator | 2025-06-22 12:14:21 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:14:21.063960 | orchestrator | 2025-06-22 12:14:21 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:14:24.114421 | orchestrator | 2025-06-22 12:14:24 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:14:24.114524 | orchestrator | 2025-06-22 12:14:24 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED
2025-06-22 12:14:24.115975 | orchestrator | 2025-06-22 12:14:24 | INFO  | Task
3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:24.117079 | orchestrator | 2025-06-22 12:14:24 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:24.117101 | orchestrator | 2025-06-22 12:14:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:27.161294 | orchestrator | 2025-06-22 12:14:27 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:27.163032 | orchestrator | 2025-06-22 12:14:27 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:27.165747 | orchestrator | 2025-06-22 12:14:27 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:27.167882 | orchestrator | 2025-06-22 12:14:27 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:27.167909 | orchestrator | 2025-06-22 12:14:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:30.216139 | orchestrator | 2025-06-22 12:14:30 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:30.217707 | orchestrator | 2025-06-22 12:14:30 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:30.220754 | orchestrator | 2025-06-22 12:14:30 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:30.222894 | orchestrator | 2025-06-22 12:14:30 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:30.222921 | orchestrator | 2025-06-22 12:14:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:33.271826 | orchestrator | 2025-06-22 12:14:33 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:33.272433 | orchestrator | 2025-06-22 12:14:33 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:33.273914 | orchestrator | 2025-06-22 12:14:33 | INFO  | Task 
3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:33.275924 | orchestrator | 2025-06-22 12:14:33 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:33.275951 | orchestrator | 2025-06-22 12:14:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:36.328872 | orchestrator | 2025-06-22 12:14:36 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:36.329010 | orchestrator | 2025-06-22 12:14:36 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:36.330234 | orchestrator | 2025-06-22 12:14:36 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:36.333216 | orchestrator | 2025-06-22 12:14:36 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:36.333262 | orchestrator | 2025-06-22 12:14:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:39.373747 | orchestrator | 2025-06-22 12:14:39 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:39.374158 | orchestrator | 2025-06-22 12:14:39 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:39.375283 | orchestrator | 2025-06-22 12:14:39 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:39.376369 | orchestrator | 2025-06-22 12:14:39 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:39.376397 | orchestrator | 2025-06-22 12:14:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:42.425853 | orchestrator | 2025-06-22 12:14:42 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:42.429214 | orchestrator | 2025-06-22 12:14:42 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:42.431614 | orchestrator | 2025-06-22 12:14:42 | INFO  | Task 
3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:42.433539 | orchestrator | 2025-06-22 12:14:42 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:42.433846 | orchestrator | 2025-06-22 12:14:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:45.479273 | orchestrator | 2025-06-22 12:14:45 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:45.482140 | orchestrator | 2025-06-22 12:14:45 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state STARTED 2025-06-22 12:14:45.483498 | orchestrator | 2025-06-22 12:14:45 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED 2025-06-22 12:14:45.484469 | orchestrator | 2025-06-22 12:14:45 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED 2025-06-22 12:14:45.484501 | orchestrator | 2025-06-22 12:14:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:14:48.535720 | orchestrator | 2025-06-22 12:14:48 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:14:48.542755 | orchestrator | 2025-06-22 12:14:48 | INFO  | Task 80668b58-ccdf-4015-8452-e52c801c5a0d is in state SUCCESS 2025-06-22 12:14:48.543703 | orchestrator | 2025-06-22 12:14:48.543758 | orchestrator | 2025-06-22 12:14:48.543815 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:14:48.543829 | orchestrator | 2025-06-22 12:14:48.543841 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:14:48.543852 | orchestrator | Sunday 22 June 2025 12:09:56 +0000 (0:00:00.358) 0:00:00.358 *********** 2025-06-22 12:14:48.543865 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:14:48.543877 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:14:48.543888 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:14:48.543899 | orchestrator | ok: [testbed-node-3] 
2025-06-22 12:14:48.543910 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:14:48.543920 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:14:48.543931 | orchestrator | 2025-06-22 12:14:48.543942 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:14:48.543953 | orchestrator | Sunday 22 June 2025 12:09:57 +0000 (0:00:00.922) 0:00:01.281 *********** 2025-06-22 12:14:48.543991 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-22 12:14:48.544003 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-22 12:14:48.544014 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-22 12:14:48.544025 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-22 12:14:48.544036 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-22 12:14:48.544046 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-22 12:14:48.544058 | orchestrator | 2025-06-22 12:14:48.544068 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-22 12:14:48.544080 | orchestrator | 2025-06-22 12:14:48.544091 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 12:14:48.544102 | orchestrator | Sunday 22 June 2025 12:09:57 +0000 (0:00:00.677) 0:00:01.959 *********** 2025-06-22 12:14:48.544113 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:14:48.544126 | orchestrator | 2025-06-22 12:14:48.544137 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-22 12:14:48.544148 | orchestrator | Sunday 22 June 2025 12:09:58 +0000 (0:00:00.962) 0:00:02.921 *********** 2025-06-22 12:14:48.544159 | orchestrator | ok: [testbed-node-1] 2025-06-22 
12:14:48.544170 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:14:48.544181 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:14:48.544192 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:14:48.544286 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:14:48.544299 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:14:48.544311 | orchestrator | 2025-06-22 12:14:48.544323 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-22 12:14:48.544336 | orchestrator | Sunday 22 June 2025 12:10:00 +0000 (0:00:02.165) 0:00:05.087 *********** 2025-06-22 12:14:48.544347 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:14:48.544360 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:14:48.544372 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:14:48.544384 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:14:48.544395 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:14:48.544408 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:14:48.544420 | orchestrator | 2025-06-22 12:14:48.544432 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-22 12:14:48.544444 | orchestrator | Sunday 22 June 2025 12:10:02 +0000 (0:00:01.211) 0:00:06.298 *********** 2025-06-22 12:14:48.544456 | orchestrator | ok: [testbed-node-0] => { 2025-06-22 12:14:48.544521 | orchestrator |  "changed": false, 2025-06-22 12:14:48.544922 | orchestrator |  "msg": "All assertions passed" 2025-06-22 12:14:48.544936 | orchestrator | } 2025-06-22 12:14:48.544947 | orchestrator | ok: [testbed-node-1] => { 2025-06-22 12:14:48.544958 | orchestrator |  "changed": false, 2025-06-22 12:14:48.544969 | orchestrator |  "msg": "All assertions passed" 2025-06-22 12:14:48.544979 | orchestrator | } 2025-06-22 12:14:48.544990 | orchestrator | ok: [testbed-node-2] => { 2025-06-22 12:14:48.545000 | orchestrator |  "changed": false, 2025-06-22 12:14:48.545012 | orchestrator |  "msg": "All assertions passed" 
2025-06-22 12:14:48.545023 | orchestrator | } 2025-06-22 12:14:48.545033 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 12:14:48.545044 | orchestrator |  "changed": false, 2025-06-22 12:14:48.545055 | orchestrator |  "msg": "All assertions passed" 2025-06-22 12:14:48.545276 | orchestrator | } 2025-06-22 12:14:48.545288 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 12:14:48.545299 | orchestrator |  "changed": false, 2025-06-22 12:14:48.545310 | orchestrator |  "msg": "All assertions passed" 2025-06-22 12:14:48.545320 | orchestrator | } 2025-06-22 12:14:48.545331 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 12:14:48.545342 | orchestrator |  "changed": false, 2025-06-22 12:14:48.545353 | orchestrator |  "msg": "All assertions passed" 2025-06-22 12:14:48.545364 | orchestrator | } 2025-06-22 12:14:48.545386 | orchestrator | 2025-06-22 12:14:48.545397 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-22 12:14:48.545408 | orchestrator | Sunday 22 June 2025 12:10:02 +0000 (0:00:00.678) 0:00:06.977 *********** 2025-06-22 12:14:48.545476 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.545488 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.545499 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.545509 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.545520 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.545531 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.545542 | orchestrator | 2025-06-22 12:14:48.545553 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-22 12:14:48.545564 | orchestrator | Sunday 22 June 2025 12:10:03 +0000 (0:00:00.442) 0:00:07.419 *********** 2025-06-22 12:14:48.545574 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-22 12:14:48.545585 | orchestrator | 2025-06-22 12:14:48.545596 | orchestrator | TASK 
[service-ks-register : neutron | Creating endpoints] ********************** 2025-06-22 12:14:48.545607 | orchestrator | Sunday 22 June 2025 12:10:06 +0000 (0:00:03.423) 0:00:10.842 *********** 2025-06-22 12:14:48.545618 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-22 12:14:48.545630 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-22 12:14:48.545641 | orchestrator | 2025-06-22 12:14:48.545703 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-22 12:14:48.545717 | orchestrator | Sunday 22 June 2025 12:10:13 +0000 (0:00:06.763) 0:00:17.606 *********** 2025-06-22 12:14:48.545728 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 12:14:48.545739 | orchestrator | 2025-06-22 12:14:48.545749 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-22 12:14:48.545760 | orchestrator | Sunday 22 June 2025 12:10:16 +0000 (0:00:03.507) 0:00:21.114 *********** 2025-06-22 12:14:48.545771 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:14:48.545803 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-22 12:14:48.545814 | orchestrator | 2025-06-22 12:14:48.545825 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-22 12:14:48.545836 | orchestrator | Sunday 22 June 2025 12:10:20 +0000 (0:00:04.079) 0:00:25.193 *********** 2025-06-22 12:14:48.545846 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 12:14:48.545857 | orchestrator | 2025-06-22 12:14:48.545867 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-22 12:14:48.545878 | orchestrator | Sunday 22 June 2025 12:10:24 +0000 (0:00:04.048) 0:00:29.242 *********** 2025-06-22 
12:14:48.545889 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-22 12:14:48.545899 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-22 12:14:48.545910 | orchestrator | 2025-06-22 12:14:48.545921 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 12:14:48.545932 | orchestrator | Sunday 22 June 2025 12:10:33 +0000 (0:00:08.439) 0:00:37.681 *********** 2025-06-22 12:14:48.545943 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.545953 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.545964 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.545975 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.545985 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.545996 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.546006 | orchestrator | 2025-06-22 12:14:48.546066 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-22 12:14:48.546080 | orchestrator | Sunday 22 June 2025 12:10:34 +0000 (0:00:00.787) 0:00:38.469 *********** 2025-06-22 12:14:48.546091 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.546101 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.546123 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.546136 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.546147 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.546159 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.546171 | orchestrator | 2025-06-22 12:14:48.546183 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-22 12:14:48.546196 | orchestrator | Sunday 22 June 2025 12:10:36 +0000 (0:00:02.355) 0:00:40.824 *********** 2025-06-22 12:14:48.546208 | orchestrator | ok: [testbed-node-0] 2025-06-22 
12:14:48.546220 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:14:48.546232 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:14:48.546244 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:14:48.546256 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:14:48.546269 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:14:48.546280 | orchestrator | 2025-06-22 12:14:48.546293 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 12:14:48.546305 | orchestrator | Sunday 22 June 2025 12:10:38 +0000 (0:00:01.908) 0:00:42.732 *********** 2025-06-22 12:14:48.546317 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.546329 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.546341 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.546353 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.546365 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.546377 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.546389 | orchestrator | 2025-06-22 12:14:48.546402 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-22 12:14:48.546414 | orchestrator | Sunday 22 June 2025 12:10:40 +0000 (0:00:02.529) 0:00:45.262 *********** 2025-06-22 12:14:48.546430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.546493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.546509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2025-06-22 12:14:48.546528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.546540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.546551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.546563 | orchestrator | 2025-06-22 12:14:48.546574 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-22 12:14:48.546585 | orchestrator | Sunday 22 June 2025 12:10:44 +0000 (0:00:03.442) 0:00:48.705 *********** 2025-06-22 12:14:48.546596 | orchestrator | [WARNING]: Skipped 2025-06-22 12:14:48.546608 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-22 12:14:48.546619 | orchestrator | due to this access issue: 2025-06-22 12:14:48.546630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-22 12:14:48.546640 | orchestrator | a directory 2025-06-22 12:14:48.546652 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:14:48.546662 | orchestrator | 2025-06-22 12:14:48.546706 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 12:14:48.546719 | orchestrator | Sunday 22 June 2025 12:10:45 +0000 (0:00:01.365) 0:00:50.070 *********** 2025-06-22 12:14:48.546731 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:14:48.546743 | orchestrator | 2025-06-22 12:14:48.546760 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-22 
12:14:48.546788 | orchestrator | Sunday 22 June 2025 12:10:47 +0000 (0:00:01.778) 0:00:51.849 *********** 2025-06-22 12:14:48.546801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.546814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2025-06-22 12:14:48.546825 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.546837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.546892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.546913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.546925 | orchestrator | 2025-06-22 12:14:48.546936 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-22 12:14:48.546947 | orchestrator | Sunday 22 June 2025 12:10:50 +0000 (0:00:03.267) 0:00:55.116 *********** 2025-06-22 12:14:48.546958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.546970 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.546981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.546992 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.547038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547058 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.547070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547081 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.547093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547104 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.547115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547126 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.547137 | orchestrator | 2025-06-22 12:14:48.547148 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-22 12:14:48.547159 | orchestrator | Sunday 22 June 2025 12:10:55 +0000 (0:00:04.536) 0:00:59.652 *********** 2025-06-22 12:14:48.547170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547188 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.547235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547248 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.547259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547271 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.547282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547293 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.547304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547316 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.547327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547345 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.547356 | orchestrator | 2025-06-22 12:14:48.547367 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-22 12:14:48.547388 | orchestrator | Sunday 22 June 2025 12:11:00 +0000 (0:00:05.246) 0:01:04.898 *********** 2025-06-22 12:14:48.547399 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.547411 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.547421 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.547433 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.547443 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.547454 | 
orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.547465 | orchestrator | 2025-06-22 12:14:48.547476 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-22 12:14:48.547487 | orchestrator | Sunday 22 June 2025 12:11:05 +0000 (0:00:04.707) 0:01:09.606 *********** 2025-06-22 12:14:48.547498 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.547509 | orchestrator | 2025-06-22 12:14:48.547520 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-22 12:14:48.547530 | orchestrator | Sunday 22 June 2025 12:11:05 +0000 (0:00:00.258) 0:01:09.864 *********** 2025-06-22 12:14:48.547541 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.547552 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.547563 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.547573 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.547584 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.547595 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.547606 | orchestrator | 2025-06-22 12:14:48.547617 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-22 12:14:48.547628 | orchestrator | Sunday 22 June 2025 12:11:06 +0000 (0:00:01.344) 0:01:11.208 *********** 2025-06-22 12:14:48.547639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547651 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.547662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547680 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.547691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547702 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.547728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.547741 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.547752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547764 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.547805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.547817 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.547828 | orchestrator | 2025-06-22 12:14:48.547846 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-22 12:14:48.547857 | orchestrator | Sunday 22 June 2025 12:11:10 +0000 (0:00:03.421) 0:01:14.630 *********** 2025-06-22 12:14:48.547868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.547892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.547904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.547916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.547928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.547945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.547956 | orchestrator | 2025-06-22 12:14:48.547967 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-22 12:14:48.547979 | orchestrator | Sunday 22 June 2025 12:11:14 +0000 (0:00:04.198) 0:01:18.828 *********** 2025-06-22 12:14:48.548003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.548044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.548132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.548151 | orchestrator | 2025-06-22 12:14:48.548168 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-22 12:14:48.548184 | orchestrator | Sunday 22 June 2025 12:11:21 +0000 (0:00:07.055) 0:01:25.884 *********** 2025-06-22 12:14:48.548202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.548222 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.548240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.548288 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.548307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.548326 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.548364 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548415 | orchestrator | 2025-06-22 12:14:48.548435 | orchestrator | TASK [neutron : Copying over ssh key] 
****************************************** 2025-06-22 12:14:48.548455 | orchestrator | Sunday 22 June 2025 12:11:26 +0000 (0:00:04.436) 0:01:30.321 *********** 2025-06-22 12:14:48.548474 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.548494 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.548514 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.548534 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:14:48.548553 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:14:48.548573 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:14:48.548592 | orchestrator | 2025-06-22 12:14:48.548613 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-22 12:14:48.548633 | orchestrator | Sunday 22 June 2025 12:11:29 +0000 (0:00:03.516) 0:01:33.838 *********** 2025-06-22 12:14:48.548653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.548673 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.548693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.548713 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.548751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.548796 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.548817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.548883 | orchestrator | 2025-06-22 12:14:48.548902 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-22 12:14:48.548921 | orchestrator | Sunday 22 June 2025 12:11:33 +0000 (0:00:04.278) 0:01:38.116 *********** 2025-06-22 12:14:48.548940 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.548958 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.548977 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.548996 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.549014 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.549034 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.549052 | orchestrator | 2025-06-22 12:14:48.549072 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-22 12:14:48.549090 | orchestrator | Sunday 22 June 2025 12:11:36 +0000 (0:00:03.001) 0:01:41.117 *********** 2025-06-22 12:14:48.549109 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.549128 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.549148 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.549167 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.549186 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.549205 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.549224 | orchestrator | 2025-06-22 12:14:48.549243 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-22 12:14:48.549261 | orchestrator | Sunday 22 June 2025 12:11:39 +0000 (0:00:02.518) 
0:01:43.636 *********** 2025-06-22 12:14:48.549279 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.549295 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.549539 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.549582 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.549615 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.549633 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.549651 | orchestrator | 2025-06-22 12:14:48.549669 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-22 12:14:48.549687 | orchestrator | Sunday 22 June 2025 12:11:41 +0000 (0:00:02.434) 0:01:46.070 *********** 2025-06-22 12:14:48.549706 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.549723 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.549740 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.549757 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.549851 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.549873 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.549892 | orchestrator | 2025-06-22 12:14:48.549911 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-22 12:14:48.549930 | orchestrator | Sunday 22 June 2025 12:11:44 +0000 (0:00:02.234) 0:01:48.305 *********** 2025-06-22 12:14:48.549949 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.549967 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.549986 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.550004 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.550055 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.550075 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.550092 | orchestrator | 2025-06-22 12:14:48.550108 | orchestrator | TASK [neutron : Copying over 
dhcp_agent.ini] *********************************** 2025-06-22 12:14:48.550124 | orchestrator | Sunday 22 June 2025 12:11:46 +0000 (0:00:02.183) 0:01:50.488 *********** 2025-06-22 12:14:48.550141 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.550159 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.550177 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.550195 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.550211 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.550227 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.550243 | orchestrator | 2025-06-22 12:14:48.550259 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-22 12:14:48.550277 | orchestrator | Sunday 22 June 2025 12:11:48 +0000 (0:00:02.398) 0:01:52.887 *********** 2025-06-22 12:14:48.550294 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 12:14:48.550312 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.550330 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 12:14:48.550346 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.550363 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 12:14:48.550381 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.550399 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 12:14:48.550417 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 12:14:48.550435 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.550453 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.550471 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 12:14:48.550489 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.550507 | orchestrator | 2025-06-22 12:14:48.550523 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-22 12:14:48.550541 | orchestrator | Sunday 22 June 2025 12:11:51 +0000 (0:00:03.343) 0:01:56.231 *********** 2025-06-22 12:14:48.550558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.550593 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.550633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.550651 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.550668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.550683 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.550700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.550717 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.550734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.550763 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.550808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 
12:14:48.550827 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.550844 | orchestrator | 2025-06-22 12:14:48.550861 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-22 12:14:48.550877 | orchestrator | Sunday 22 June 2025 12:11:54 +0000 (0:00:02.445) 0:01:58.677 *********** 2025-06-22 12:14:48.550914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.550932 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.550948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.550963 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.550979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.551005 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.551021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.551036 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.551051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.551067 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.551097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.551114 | orchestrator | skipping: [testbed-node-5] 
2025-06-22 12:14:48.551129 | orchestrator | 2025-06-22 12:14:48.551145 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-22 12:14:48.551162 | orchestrator | Sunday 22 June 2025 12:11:56 +0000 (0:00:02.478) 0:02:01.156 *********** 2025-06-22 12:14:48.551178 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.551194 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.551210 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.551227 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.551242 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.551258 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.551275 | orchestrator | 2025-06-22 12:14:48.551292 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-22 12:14:48.551309 | orchestrator | Sunday 22 June 2025 12:12:01 +0000 (0:00:04.574) 0:02:05.731 *********** 2025-06-22 12:14:48.551326 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.551344 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.551362 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.551379 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:14:48.551407 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:14:48.551425 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:14:48.551443 | orchestrator | 2025-06-22 12:14:48.551460 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-22 12:14:48.551478 | orchestrator | Sunday 22 June 2025 12:12:08 +0000 (0:00:07.104) 0:02:12.835 *********** 2025-06-22 12:14:48.551496 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.551513 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.551531 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.551548 | orchestrator | skipping: [testbed-node-5] 
2025-06-22 12:14:48.551565 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.551582 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.551600 | orchestrator | 2025-06-22 12:14:48.551616 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-22 12:14:48.551632 | orchestrator | Sunday 22 June 2025 12:12:11 +0000 (0:00:02.639) 0:02:15.474 *********** 2025-06-22 12:14:48.551648 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.551665 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.551682 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.551698 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.551714 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.551730 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.551748 | orchestrator | 2025-06-22 12:14:48.551764 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-22 12:14:48.551858 | orchestrator | Sunday 22 June 2025 12:12:13 +0000 (0:00:02.421) 0:02:17.896 *********** 2025-06-22 12:14:48.551876 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.551892 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.551908 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.551924 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.551941 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.551958 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.551975 | orchestrator | 2025-06-22 12:14:48.551992 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-22 12:14:48.552009 | orchestrator | Sunday 22 June 2025 12:12:15 +0000 (0:00:01.898) 0:02:19.795 *********** 2025-06-22 12:14:48.552026 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.552044 | orchestrator | skipping: [testbed-node-1] 
2025-06-22 12:14:48.552062 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.552079 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.552096 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.552113 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.552129 | orchestrator | 2025-06-22 12:14:48.552145 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-22 12:14:48.552162 | orchestrator | Sunday 22 June 2025 12:12:18 +0000 (0:00:03.240) 0:02:23.036 *********** 2025-06-22 12:14:48.552178 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.552191 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.552205 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.552218 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.552231 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.552245 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.552259 | orchestrator | 2025-06-22 12:14:48.552273 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-22 12:14:48.552287 | orchestrator | Sunday 22 June 2025 12:12:20 +0000 (0:00:02.192) 0:02:25.228 *********** 2025-06-22 12:14:48.552300 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:14:48.552313 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.552326 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.552340 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.552352 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.552365 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.552379 | orchestrator | 2025-06-22 12:14:48.552392 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-22 12:14:48.552425 | orchestrator | Sunday 22 June 2025 12:12:22 +0000 (0:00:01.927) 0:02:27.156 *********** 
2025-06-22 12:14:48.552447 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:48.552474 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:48.552488 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:48.552501 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:14:48.552515 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:14:48.552528 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:14:48.552541 | orchestrator |
2025-06-22 12:14:48.552554 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-06-22 12:14:48.552568 | orchestrator | Sunday 22 June 2025 12:12:24 +0000 (0:00:02.084) 0:02:29.241 ***********
2025-06-22 12:14:48.552582 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:48.552596 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:48.552609 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:48.552623 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:14:48.552637 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:14:48.552650 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:14:48.552663 | orchestrator |
2025-06-22 12:14:48.552677 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-06-22 12:14:48.552691 | orchestrator | Sunday 22 June 2025 12:12:26 +0000 (0:00:01.647) 0:02:30.888 ***********
2025-06-22 12:14:48.552705 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-22 12:14:48.552719 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:48.552734 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-22 12:14:48.552748 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:14:48.552763 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-22 12:14:48.552806 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:48.552821 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-22 12:14:48.552836 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:48.552851 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-22 12:14:48.552866 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:14:48.552881 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-22 12:14:48.552896 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:14:48.552910 | orchestrator |
2025-06-22 12:14:48.552925 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-06-22 12:14:48.552938 | orchestrator | Sunday 22 June 2025 12:12:29 +0000 (0:00:02.498) 0:02:33.387 ***********
2025-06-22 12:14:48.552954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-22 12:14:48.552971 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:48.552987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.553013 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:14:48.553048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 12:14:48.553065 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 12:14:48.553080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.553096 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:14:48.553110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.553124 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:14:48.553138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 12:14:48.553160 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:14:48.553174 | orchestrator | 2025-06-22 12:14:48.553186 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-22 12:14:48.553199 | orchestrator | Sunday 22 June 2025 12:12:31 +0000 (0:00:02.519) 0:02:35.907 *********** 2025-06-22 12:14:48.553213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.553240 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.553255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 12:14:48.553269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.553291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 12:14:48.553305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-22 12:14:48.553319 | orchestrator |
2025-06-22 12:14:48.553334 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-22 12:14:48.553358 | orchestrator | Sunday 22 June 2025 12:12:36 +0000 (0:00:04.753) 0:02:40.660 ***********
2025-06-22 12:14:48.553373 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:14:48.553387 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:14:48.553401 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:14:48.553415 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:14:48.553428 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:14:48.553440 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:14:48.553452 | orchestrator |
2025-06-22 12:14:48.553464 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-06-22 12:14:48.553477 | orchestrator | Sunday 22 June 2025 12:12:37 +0000 (0:00:00.647) 0:02:41.307 ***********
2025-06-22 12:14:48.553490 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:48.553504 | orchestrator |
2025-06-22 12:14:48.553518 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-06-22 12:14:48.553531 | orchestrator | Sunday 22 June 2025 12:12:39 +0000 (0:00:02.354) 0:02:43.661 ***********
2025-06-22 12:14:48.553546 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:48.553559 | orchestrator |
2025-06-22 12:14:48.553573 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-06-22 12:14:48.553588 | orchestrator | Sunday 22 June 2025 12:12:41 +0000 (0:00:02.416) 0:02:46.078 ***********
2025-06-22 12:14:48.553601 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:48.553615 | orchestrator |
2025-06-22 12:14:48.553628 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-22 12:14:48.553642 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:41.283) 0:03:27.361 ***********
2025-06-22 12:14:48.553656 | orchestrator |
2025-06-22 12:14:48.553670 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-22 12:14:48.553684 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:00.067) 0:03:27.429 ***********
2025-06-22 12:14:48.553700 | orchestrator |
2025-06-22 12:14:48.553713 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-22 12:14:48.553726 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:00.304) 0:03:27.733 ***********
2025-06-22 12:14:48.553740 | orchestrator |
2025-06-22 12:14:48.553753 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-22 12:14:48.553798 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:00.067) 0:03:27.801 ***********
2025-06-22 12:14:48.553814 | orchestrator |
2025-06-22 12:14:48.553828 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-22 12:14:48.553841 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:00.066) 0:03:27.868 ***********
2025-06-22 12:14:48.553854 | orchestrator |
2025-06-22 12:14:48.553867 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-22 12:14:48.553880 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:00.068) 0:03:27.936 ***********
2025-06-22 12:14:48.553894 | orchestrator |
2025-06-22 12:14:48.553907 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-22 12:14:48.553920 | orchestrator | Sunday 22 June 2025 12:13:23 +0000 (0:00:00.069) 0:03:28.006 ***********
2025-06-22 12:14:48.553934 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:14:48.553947 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:14:48.553961 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:14:48.553975 | orchestrator |
2025-06-22 12:14:48.553987 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-22 12:14:48.554001 | orchestrator | Sunday 22 June 2025 12:13:51 +0000 (0:00:27.899) 0:03:55.906 ***********
2025-06-22 12:14:48.554014 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:14:48.554066 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:14:48.554081 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:14:48.554095 | orchestrator |
2025-06-22 12:14:48.554110 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:14:48.554126 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-22 12:14:48.554141 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-22 12:14:48.554156 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-22 12:14:48.554168 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-22 12:14:48.554182 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-22 12:14:48.554196 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-22 12:14:48.554210 | orchestrator |
2025-06-22 12:14:48.554223 | orchestrator |
2025-06-22 12:14:48.554236 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:14:48.554248 | orchestrator | Sunday 22 June 2025 12:14:47 +0000 (0:00:55.728) 0:04:51.635 ***********
2025-06-22 12:14:48.554261 | orchestrator | ===============================================================================
2025-06-22 12:14:48.554274 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.73s
2025-06-22 12:14:48.554287 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.28s
2025-06-22 12:14:48.554301 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.90s
2025-06-22 12:14:48.554314 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.44s
2025-06-22 12:14:48.554351 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.10s
2025-06-22 12:14:48.554368 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.06s
2025-06-22 12:14:48.554383 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.76s
2025-06-22 12:14:48.554398 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.25s
2025-06-22 12:14:48.554424 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.75s
2025-06-22 12:14:48.554440 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.71s
2025-06-22 12:14:48.554454 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.57s
2025-06-22 12:14:48.554467 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.54s
2025-06-22 12:14:48.554482 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.44s
2025-06-22 12:14:48.554497 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.28s
2025-06-22 12:14:48.554511 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.20s
2025-06-22 12:14:48.554524 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.08s
2025-06-22 12:14:48.554539 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.05s
2025-06-22 12:14:48.554553 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.52s
2025-06-22 12:14:48.554568 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.51s
2025-06-22 12:14:48.554582 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.44s
2025-06-22 12:14:48.554595 | orchestrator | 2025-06-22 12:14:48 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED
2025-06-22 12:14:48.554609 | orchestrator | 2025-06-22 12:14:48 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:14:48.554623 | orchestrator | 2025-06-22 12:14:48 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:14:51.611084 | orchestrator | 2025-06-22 12:14:51 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:14:51.612047 | orchestrator | 2025-06-22 12:14:51 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:14:51.612510 | orchestrator | 2025-06-22 12:14:51 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state STARTED
2025-06-22 12:14:51.615541 | orchestrator | 2025-06-22 12:14:51 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:14:51.615587 | orchestrator | 2025-06-22 12:14:51 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:14:54.646360 | orchestrator | 2025-06-22 12:14:54 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:14:54.650139 | orchestrator | 2025-06-22 12:14:54 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:14:54.651293 |
orchestrator | 2025-06-22 12:14:54 | INFO  | Task 3b6b5079-75d3-4d71-a761-0d07d1a69b11 is in state SUCCESS
2025-06-22 12:14:54.653315 | orchestrator | 2025-06-22 12:14:54 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:14:54.653345 | orchestrator | 2025-06-22 12:14:54 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:14:57.693220 | orchestrator | 2025-06-22 12:14:57 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:14:57.695628 | orchestrator | 2025-06-22 12:14:57 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:14:57.696956 | orchestrator | 2025-06-22 12:14:57 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:14:57.698268 | orchestrator | 2025-06-22 12:14:57 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:14:57.698294 | orchestrator | 2025-06-22 12:14:57 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:00.737127 | orchestrator | 2025-06-22 12:15:00 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:00.738579 | orchestrator | 2025-06-22 12:15:00 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:00.740060 | orchestrator | 2025-06-22 12:15:00 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:00.741451 | orchestrator | 2025-06-22 12:15:00 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:00.741476 | orchestrator | 2025-06-22 12:15:00 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:03.794921 | orchestrator | 2025-06-22 12:15:03 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:03.798267 | orchestrator | 2025-06-22 12:15:03 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:03.800378 | orchestrator | 2025-06-22 12:15:03 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:03.802137 | orchestrator | 2025-06-22 12:15:03 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:03.802160 | orchestrator | 2025-06-22 12:15:03 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:06.851866 | orchestrator | 2025-06-22 12:15:06 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:06.853931 | orchestrator | 2025-06-22 12:15:06 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:06.855972 | orchestrator | 2025-06-22 12:15:06 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:06.859457 | orchestrator | 2025-06-22 12:15:06 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:06.859486 | orchestrator | 2025-06-22 12:15:06 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:09.906402 | orchestrator | 2025-06-22 12:15:09 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:09.908263 | orchestrator | 2025-06-22 12:15:09 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:09.909921 | orchestrator | 2025-06-22 12:15:09 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:09.911696 | orchestrator | 2025-06-22 12:15:09 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:09.911740 | orchestrator | 2025-06-22 12:15:09 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:12.971895 | orchestrator | 2025-06-22 12:15:12 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:12.972867 | orchestrator | 2025-06-22 12:15:12 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:12.973947 | orchestrator | 2025-06-22 12:15:12 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:12.975740 | orchestrator | 2025-06-22 12:15:12 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:12.975816 | orchestrator | 2025-06-22 12:15:12 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:16.025989 | orchestrator | 2025-06-22 12:15:16 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:16.027571 | orchestrator | 2025-06-22 12:15:16 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:16.029728 | orchestrator | 2025-06-22 12:15:16 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:16.031272 | orchestrator | 2025-06-22 12:15:16 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:16.031319 | orchestrator | 2025-06-22 12:15:16 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:19.080411 | orchestrator | 2025-06-22 12:15:19 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:19.083427 | orchestrator | 2025-06-22 12:15:19 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:19.085079 | orchestrator | 2025-06-22 12:15:19 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state STARTED
2025-06-22 12:15:19.086548 | orchestrator | 2025-06-22 12:15:19 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:19.086590 | orchestrator | 2025-06-22 12:15:19 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:22.132267 | orchestrator | 2025-06-22 12:15:22 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:22.137066 | orchestrator | 2025-06-22 12:15:22 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:22.137178 | orchestrator | 2025-06-22 12:15:22 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:22.140575 | orchestrator | 2025-06-22 12:15:22 | INFO  | Task 3a30233b-b307-4373-83ec-a0a4b4cba5bf is in state SUCCESS
2025-06-22 12:15:22.142432 | orchestrator |
2025-06-22 12:15:22.142469 | orchestrator |
2025-06-22 12:15:22.142482 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 12:15:22.142493 | orchestrator |
2025-06-22 12:15:22.142505 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 12:15:22.142516 | orchestrator | Sunday 22 June 2025 12:14:23 +0000 (0:00:00.288) 0:00:00.288 ***********
2025-06-22 12:15:22.142528 | orchestrator | ok: [testbed-manager]
2025-06-22 12:15:22.142540 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:15:22.142551 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:15:22.142562 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:15:22.143081 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:15:22.143096 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:15:22.143107 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:15:22.143118 | orchestrator |
2025-06-22 12:15:22.143129 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 12:15:22.143140 | orchestrator | Sunday 22 June 2025 12:14:24 +0000 (0:00:01.034) 0:00:01.323 ***********
2025-06-22 12:15:22.143152 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143164 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143175 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143185 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143196 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143213 | orchestrator | ok: [testbed-node-1] =>
(item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143231 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-06-22 12:15:22.143250 | orchestrator |
2025-06-22 12:15:22.143267 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-22 12:15:22.143287 | orchestrator |
2025-06-22 12:15:22.143306 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-06-22 12:15:22.143322 | orchestrator | Sunday 22 June 2025 12:14:25 +0000 (0:00:00.742) 0:00:02.066 ***********
2025-06-22 12:15:22.143335 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:15:22.143347 | orchestrator |
2025-06-22 12:15:22.143358 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-06-22 12:15:22.143470 | orchestrator | Sunday 22 June 2025 12:14:27 +0000 (0:00:01.947) 0:00:04.014 ***********
2025-06-22 12:15:22.143482 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-06-22 12:15:22.143566 | orchestrator |
2025-06-22 12:15:22.143580 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-06-22 12:15:22.143591 | orchestrator | Sunday 22 June 2025 12:14:30 +0000 (0:00:03.199) 0:00:07.214 ***********
2025-06-22 12:15:22.143603 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-06-22 12:15:22.143615 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-06-22 12:15:22.143625 | orchestrator |
2025-06-22 12:15:22.143717 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-06-22 12:15:22.143733 | orchestrator | Sunday 22 June 2025 12:14:36 +0000 (0:00:05.856) 0:00:13.070 ***********
2025-06-22 12:15:22.143744 | orchestrator | ok: [testbed-manager] => (item=service)
2025-06-22 12:15:22.143807 | orchestrator |
2025-06-22 12:15:22.143820 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-06-22 12:15:22.143831 | orchestrator | Sunday 22 June 2025 12:14:39 +0000 (0:00:02.904) 0:00:15.974 ***********
2025-06-22 12:15:22.143842 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-22 12:15:22.143852 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-06-22 12:15:22.143863 | orchestrator |
2025-06-22 12:15:22.143874 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-06-22 12:15:22.143884 | orchestrator | Sunday 22 June 2025 12:14:42 +0000 (0:00:03.552) 0:00:19.527 ***********
2025-06-22 12:15:22.143895 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-06-22 12:15:22.143906 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-06-22 12:15:22.143916 | orchestrator |
2025-06-22 12:15:22.143927 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-06-22 12:15:22.143938 | orchestrator | Sunday 22 June 2025 12:14:48 +0000 (0:00:05.748) 0:00:25.275 ***********
2025-06-22 12:15:22.143948 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2025-06-22 12:15:22.143959 | orchestrator |
2025-06-22 12:15:22.143969 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:15:22.143980 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.143992 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.144003 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.144013 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.144024 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.144046 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.144058 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:15:22.144068 | orchestrator |
2025-06-22 12:15:22.144079 | orchestrator |
2025-06-22 12:15:22.144090 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:15:22.144117 | orchestrator | Sunday 22 June 2025 12:14:53 +0000 (0:00:05.080) 0:00:30.355 ***********
2025-06-22 12:15:22.144137 | orchestrator | ===============================================================================
2025-06-22 12:15:22.144148 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.86s
2025-06-22 12:15:22.144159 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.75s
2025-06-22 12:15:22.144169 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.08s
2025-06-22 12:15:22.144191 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.55s
2025-06-22 12:15:22.144202 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.20s
2025-06-22 12:15:22.144213 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.90s
2025-06-22 12:15:22.144223 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.95s
2025-06-22 12:15:22.144234 | orchestrator | Group hosts based on Kolla
action --------------------------------------- 1.03s 2025-06-22 12:15:22.144245 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-06-22 12:15:22.144255 | orchestrator | 2025-06-22 12:15:22.144266 | orchestrator | 2025-06-22 12:15:22.144276 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:15:22.144287 | orchestrator | 2025-06-22 12:15:22.144297 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:15:22.144308 | orchestrator | Sunday 22 June 2025 12:13:31 +0000 (0:00:00.193) 0:00:00.193 *********** 2025-06-22 12:15:22.144319 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:15:22.144329 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:15:22.144340 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:15:22.144351 | orchestrator | 2025-06-22 12:15:22.144363 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:15:22.144375 | orchestrator | Sunday 22 June 2025 12:13:31 +0000 (0:00:00.243) 0:00:00.437 *********** 2025-06-22 12:15:22.144388 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-22 12:15:22.144400 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-22 12:15:22.144413 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-22 12:15:22.144425 | orchestrator | 2025-06-22 12:15:22.144438 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-22 12:15:22.144450 | orchestrator | 2025-06-22 12:15:22.144462 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 12:15:22.144474 | orchestrator | Sunday 22 June 2025 12:13:32 +0000 (0:00:00.457) 0:00:00.894 *********** 2025-06-22 12:15:22.144486 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:15:22.144499 | orchestrator | 2025-06-22 12:15:22.144512 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-22 12:15:22.144524 | orchestrator | Sunday 22 June 2025 12:13:32 +0000 (0:00:00.470) 0:00:01.365 *********** 2025-06-22 12:15:22.144536 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-22 12:15:22.144547 | orchestrator | 2025-06-22 12:15:22.144558 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-22 12:15:22.144568 | orchestrator | Sunday 22 June 2025 12:13:36 +0000 (0:00:03.420) 0:00:04.785 *********** 2025-06-22 12:15:22.144579 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-22 12:15:22.144590 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-22 12:15:22.144600 | orchestrator | 2025-06-22 12:15:22.144611 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-22 12:15:22.144622 | orchestrator | Sunday 22 June 2025 12:13:42 +0000 (0:00:06.601) 0:00:11.387 *********** 2025-06-22 12:15:22.144632 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 12:15:22.144643 | orchestrator | 2025-06-22 12:15:22.144653 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-22 12:15:22.144664 | orchestrator | Sunday 22 June 2025 12:13:46 +0000 (0:00:03.662) 0:00:15.050 *********** 2025-06-22 12:15:22.144674 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:15:22.144685 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-22 12:15:22.144696 | orchestrator | 2025-06-22 12:15:22.144706 | orchestrator | TASK [service-ks-register : magnum | Creating 
roles] *************************** 2025-06-22 12:15:22.144724 | orchestrator | Sunday 22 June 2025 12:13:50 +0000 (0:00:04.433) 0:00:19.483 *********** 2025-06-22 12:15:22.144735 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 12:15:22.144746 | orchestrator | 2025-06-22 12:15:22.144787 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-22 12:15:22.144798 | orchestrator | Sunday 22 June 2025 12:13:54 +0000 (0:00:03.424) 0:00:22.907 *********** 2025-06-22 12:15:22.144809 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-22 12:15:22.144820 | orchestrator | 2025-06-22 12:15:22.144830 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-22 12:15:22.144841 | orchestrator | Sunday 22 June 2025 12:13:58 +0000 (0:00:04.026) 0:00:26.933 *********** 2025-06-22 12:15:22.144851 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:15:22.144862 | orchestrator | 2025-06-22 12:15:22.144872 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-22 12:15:22.144892 | orchestrator | Sunday 22 June 2025 12:14:01 +0000 (0:00:03.431) 0:00:30.365 *********** 2025-06-22 12:15:22.144903 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:15:22.144915 | orchestrator | 2025-06-22 12:15:22.144925 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-22 12:15:22.144936 | orchestrator | Sunday 22 June 2025 12:14:05 +0000 (0:00:04.125) 0:00:34.490 *********** 2025-06-22 12:15:22.144947 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:15:22.144957 | orchestrator | 2025-06-22 12:15:22.144968 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-22 12:15:22.144984 | orchestrator | Sunday 22 June 2025 12:14:09 +0000 (0:00:03.736) 0:00:38.227 *********** 2025-06-22 
12:15:22.144999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145097 | orchestrator | 2025-06-22 12:15:22.145108 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-22 12:15:22.145119 | orchestrator | Sunday 22 June 2025 12:14:11 +0000 (0:00:01.529) 0:00:39.757 *********** 2025-06-22 12:15:22.145130 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:15:22.145141 | orchestrator | 2025-06-22 12:15:22.145152 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-22 12:15:22.145162 | orchestrator | Sunday 22 June 2025 12:14:11 +0000 (0:00:00.114) 0:00:39.872 *********** 2025-06-22 12:15:22.145173 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:15:22.145184 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 12:15:22.145195 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:15:22.145205 | orchestrator | 2025-06-22 12:15:22.145216 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-22 12:15:22.145227 | orchestrator | Sunday 22 June 2025 12:14:11 +0000 (0:00:00.477) 0:00:40.349 *********** 2025-06-22 12:15:22.145237 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:15:22.145248 | orchestrator | 2025-06-22 12:15:22.145259 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-22 12:15:22.145277 | orchestrator | Sunday 22 June 2025 12:14:12 +0000 (0:00:00.843) 0:00:41.192 *********** 2025-06-22 12:15:22.145288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2025-06-22 12:15:22.145379 | orchestrator | 2025-06-22 12:15:22.145390 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-22 12:15:22.145401 | orchestrator | Sunday 22 June 2025 12:14:14 +0000 (0:00:02.366) 0:00:43.558 *********** 2025-06-22 12:15:22.145412 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:15:22.145423 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:15:22.145434 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:15:22.145444 | orchestrator | 2025-06-22 12:15:22.145455 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 12:15:22.145466 | orchestrator | Sunday 22 June 2025 12:14:15 +0000 (0:00:00.294) 0:00:43.853 *********** 2025-06-22 12:15:22.145476 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:15:22.145487 | orchestrator | 2025-06-22 12:15:22.145498 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-22 12:15:22.145509 | orchestrator | Sunday 22 June 2025 12:14:15 +0000 (0:00:00.678) 0:00:44.531 *********** 2025-06-22 12:15:22.145533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.145576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.145622 | orchestrator | 2025-06-22 12:15:22.145633 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-22 12:15:22.145643 | orchestrator | Sunday 22 June 2025 12:14:18 +0000 (0:00:02.325) 0:00:46.857 *********** 2025-06-22 12:15:22.145655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.145680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.145691 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:15:22.145703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.145720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.145732 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:15:22.145748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.145821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.145844 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:15:22.145855 | orchestrator | 2025-06-22 12:15:22.145866 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-22 12:15:22.145877 | orchestrator | Sunday 22 June 2025 12:14:18 +0000 (0:00:00.565) 0:00:47.423 *********** 2025-06-22 12:15:22.145888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.145900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.145911 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:15:22.145937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.145948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.145965 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:15:22.145975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.145985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.145995 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:15:22.146004 | orchestrator | 2025-06-22 
12:15:22.146045 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-22 12:15:22.146057 | orchestrator | Sunday 22 June 2025 12:14:20 +0000 (0:00:01.267) 0:00:48.690 *********** 2025-06-22 12:15:22.146067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146147 | orchestrator | 2025-06-22 12:15:22.146162 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-22 12:15:22.146172 | orchestrator | Sunday 22 June 2025 12:14:22 +0000 (0:00:02.486) 0:00:51.177 *********** 2025-06-22 12:15:22.146187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146272 | orchestrator | 2025-06-22 12:15:22.146282 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-22 12:15:22.146291 | orchestrator | Sunday 22 June 2025 12:14:27 +0000 (0:00:05.194) 0:00:56.371 *********** 2025-06-22 12:15:22.146301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.146311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.146321 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:15:22.146332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.146353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.146369 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:15:22.146379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 12:15:22.146390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:15:22.146400 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:15:22.146409 | orchestrator | 2025-06-22 12:15:22.146419 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-22 12:15:22.146429 | orchestrator | Sunday 22 June 2025 12:14:28 +0000 (0:00:00.886) 0:00:57.258 *********** 2025-06-22 12:15:22.146438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 
12:15:22.146454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 12:15:22.146485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:15:22.146506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:15:22.146516 | orchestrator |
2025-06-22 12:15:22.146525 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-22 12:15:22.146535 | orchestrator | Sunday 22 June 2025 12:14:30 +0000 (0:00:00.353) 0:00:59.617 ***********
2025-06-22 12:15:22.146545 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:15:22.146563 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:15:22.146573 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:15:22.146583 | orchestrator |
2025-06-22 12:15:22.146592 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-06-22 12:15:22.146602 | orchestrator | Sunday 22 June 2025 12:14:31 +0000 (0:00:00.353) 0:00:59.971 ***********
2025-06-22 12:15:22.146611 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:15:22.146621 | orchestrator |
2025-06-22 12:15:22.146631 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-06-22 12:15:22.146640 | orchestrator | Sunday 22 June 2025 12:14:33 +0000 (0:00:02.228) 0:01:02.199 ***********
2025-06-22 12:15:22.146650 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:15:22.146659 | orchestrator |
2025-06-22 12:15:22.146669 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-06-22 12:15:22.146684 | orchestrator | Sunday 22 June 2025 12:14:35 +0000 (0:00:02.331) 0:01:04.530 ***********
2025-06-22 12:15:22.146694 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:15:22.146703 | orchestrator |
2025-06-22 12:15:22.146713 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-22 12:15:22.146723 | orchestrator | Sunday 22 June 2025 12:14:50 +0000 (0:00:15.002) 0:01:19.533 ***********
2025-06-22 12:15:22.146732 | orchestrator |
2025-06-22 12:15:22.146742 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-22 12:15:22.146771 | orchestrator | Sunday 22 June 2025 12:14:50 +0000 (0:00:00.073) 0:01:19.607 ***********
2025-06-22 12:15:22.146781 | orchestrator |
2025-06-22 12:15:22.146791 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-22 12:15:22.146800 | orchestrator | Sunday 22 June 2025 12:14:50 +0000 (0:00:00.064) 0:01:19.672 ***********
2025-06-22 12:15:22.146810 | orchestrator |
2025-06-22 12:15:22.146819 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-06-22 12:15:22.146829 | orchestrator | Sunday 22 June 2025 12:14:51 +0000 (0:00:00.066) 0:01:19.738 ***********
2025-06-22 12:15:22.146838 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:15:22.146848 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:15:22.146857 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:15:22.146867 | orchestrator |
2025-06-22 12:15:22.146876 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-06-22 12:15:22.146886 | orchestrator | Sunday 22 June 2025 12:15:09 +0000 (0:00:18.851) 0:01:38.589 ***********
2025-06-22 12:15:22.146896 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:15:22.146905 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:15:22.146915 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:15:22.146924 | orchestrator |
2025-06-22 12:15:22.146934 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:15:22.146944 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 12:15:22.146954 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 12:15:22.146963 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 12:15:22.146973 | orchestrator |
2025-06-22 12:15:22.146982 | orchestrator |
2025-06-22 12:15:22.146992 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:15:22.147002 | orchestrator | Sunday 22 June 2025 12:15:19 +0000 (0:00:09.696) 0:01:48.286 ***********
2025-06-22 12:15:22.147011 | orchestrator | ===============================================================================
2025-06-22 12:15:22.147020 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.85s
2025-06-22 12:15:22.147030 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.00s
2025-06-22 12:15:22.147039 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.70s
2025-06-22 12:15:22.147054 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.60s
2025-06-22 12:15:22.147064 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.19s
2025-06-22 12:15:22.147073 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.43s
2025-06-22 12:15:22.147083 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.13s
2025-06-22 12:15:22.147092 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.03s
2025-06-22 12:15:22.147102 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.74s
2025-06-22 12:15:22.147111 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.66s
2025-06-22 12:15:22.147121 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.43s
2025-06-22 12:15:22.147130 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.42s
2025-06-22 12:15:22.147140 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.42s
2025-06-22 12:15:22.147149 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.49s
2025-06-22 12:15:22.147159 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.37s
2025-06-22 12:15:22.147168 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.36s
2025-06-22 12:15:22.147178 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.33s
2025-06-22 12:15:22.147187 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.33s
2025-06-22 12:15:22.147197 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.23s
2025-06-22 12:15:22.147206 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.53s
2025-06-22 12:15:22.147216 | orchestrator | 2025-06-22 12:15:22 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:22.147226 | orchestrator | 2025-06-22 12:15:22 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:25.185837 | orchestrator | 2025-06-22 12:15:25 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:25.186090 | orchestrator | 2025-06-22 12:15:25 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:25.187047 | orchestrator | 2025-06-22 12:15:25 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:25.187847 | orchestrator | 2025-06-22 12:15:25 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:25.187943 | orchestrator | 2025-06-22 12:15:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:28.220182 | orchestrator | 2025-06-22 12:15:28 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:28.222096 | orchestrator | 2025-06-22 12:15:28 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:28.225377 | orchestrator | 2025-06-22 12:15:28 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:28.228374 | orchestrator | 2025-06-22 12:15:28 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:28.228412 | orchestrator | 2025-06-22 12:15:28 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:31.279272 | orchestrator | 2025-06-22 12:15:31 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:31.279896 | orchestrator | 2025-06-22 12:15:31 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:31.280851 | orchestrator | 2025-06-22 12:15:31 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:31.284640 | orchestrator | 2025-06-22 12:15:31 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:31.284683 | orchestrator | 2025-06-22 12:15:31 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:34.322183 | orchestrator | 2025-06-22 12:15:34 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:34.323993 | orchestrator | 2025-06-22 12:15:34 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:34.324300 | orchestrator | 2025-06-22 12:15:34 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:34.326056 | orchestrator | 2025-06-22 12:15:34 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:34.327071 | orchestrator | 2025-06-22 12:15:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:37.354482 | orchestrator | 2025-06-22 12:15:37 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:37.354699 | orchestrator | 2025-06-22 12:15:37 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:37.355633 | orchestrator | 2025-06-22 12:15:37 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:37.356239 | orchestrator | 2025-06-22 12:15:37 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:37.356343 | orchestrator | 2025-06-22 12:15:37 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:40.387102 | orchestrator | 2025-06-22 12:15:40 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:40.387294 | orchestrator | 2025-06-22 12:15:40 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:40.387858 | orchestrator | 2025-06-22 12:15:40 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:40.388563 | orchestrator | 2025-06-22 12:15:40 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:40.388587 | orchestrator | 2025-06-22 12:15:40 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:15:43.427189 | orchestrator | 2025-06-22 12:15:43 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED
2025-06-22 12:15:43.429497 | orchestrator | 2025-06-22 12:15:43 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED
2025-06-22 12:15:43.431904 | orchestrator | 2025-06-22 12:15:43 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED
2025-06-22 12:15:43.434189 | orchestrator | 2025-06-22 12:15:43 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED
2025-06-22 12:15:43.434859 | orchestrator | 2025-06-22 12:15:43 | INFO  | Wait 1
second(s) until the next check 2025-06-22 12:15:46.471523 | orchestrator | 2025-06-22 12:15:46 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:15:46.471863 | orchestrator | 2025-06-22 12:15:46 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:15:46.472569 | orchestrator | 2025-06-22 12:15:46 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:15:46.473245 | orchestrator | 2025-06-22 12:15:46 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:15:46.473268 | orchestrator | 2025-06-22 12:15:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:15:49.501805 | orchestrator | 2025-06-22 12:15:49 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:15:49.501908 | orchestrator | 2025-06-22 12:15:49 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:15:49.502547 | orchestrator | 2025-06-22 12:15:49 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:15:49.503390 | orchestrator | 2025-06-22 12:15:49 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:15:49.503415 | orchestrator | 2025-06-22 12:15:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:15:52.539137 | orchestrator | 2025-06-22 12:15:52 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:15:52.539423 | orchestrator | 2025-06-22 12:15:52 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:15:52.540050 | orchestrator | 2025-06-22 12:15:52 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:15:52.540811 | orchestrator | 2025-06-22 12:15:52 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:15:52.540840 | orchestrator | 2025-06-22 12:15:52 | INFO  | Wait 1 second(s) until the next check 
2025-06-22 12:15:55.582329 | orchestrator | 2025-06-22 12:15:55 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:15:55.584699 | orchestrator | 2025-06-22 12:15:55 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:15:55.585304 | orchestrator | 2025-06-22 12:15:55 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:15:55.586289 | orchestrator | 2025-06-22 12:15:55 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:15:55.586322 | orchestrator | 2025-06-22 12:15:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:15:58.628695 | orchestrator | 2025-06-22 12:15:58 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:15:58.628836 | orchestrator | 2025-06-22 12:15:58 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:15:58.628863 | orchestrator | 2025-06-22 12:15:58 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:15:58.629269 | orchestrator | 2025-06-22 12:15:58 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:15:58.629292 | orchestrator | 2025-06-22 12:15:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:01.664122 | orchestrator | 2025-06-22 12:16:01 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:01.664880 | orchestrator | 2025-06-22 12:16:01 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:01.665531 | orchestrator | 2025-06-22 12:16:01 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:01.666498 | orchestrator | 2025-06-22 12:16:01 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:01.666540 | orchestrator | 2025-06-22 12:16:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:04.699129 | 
orchestrator | 2025-06-22 12:16:04 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:04.699339 | orchestrator | 2025-06-22 12:16:04 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:04.699848 | orchestrator | 2025-06-22 12:16:04 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:04.700492 | orchestrator | 2025-06-22 12:16:04 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:04.700519 | orchestrator | 2025-06-22 12:16:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:07.732107 | orchestrator | 2025-06-22 12:16:07 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:07.732196 | orchestrator | 2025-06-22 12:16:07 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:07.732551 | orchestrator | 2025-06-22 12:16:07 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:07.733170 | orchestrator | 2025-06-22 12:16:07 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:07.733196 | orchestrator | 2025-06-22 12:16:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:10.766066 | orchestrator | 2025-06-22 12:16:10 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:10.766153 | orchestrator | 2025-06-22 12:16:10 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:10.766623 | orchestrator | 2025-06-22 12:16:10 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:10.767204 | orchestrator | 2025-06-22 12:16:10 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:10.767338 | orchestrator | 2025-06-22 12:16:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:13.814278 | orchestrator | 2025-06-22 
12:16:13 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:13.815133 | orchestrator | 2025-06-22 12:16:13 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:13.816560 | orchestrator | 2025-06-22 12:16:13 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:13.817811 | orchestrator | 2025-06-22 12:16:13 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:13.818179 | orchestrator | 2025-06-22 12:16:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:16.868652 | orchestrator | 2025-06-22 12:16:16 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:16.870441 | orchestrator | 2025-06-22 12:16:16 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:16.873503 | orchestrator | 2025-06-22 12:16:16 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:16.876161 | orchestrator | 2025-06-22 12:16:16 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:16.876514 | orchestrator | 2025-06-22 12:16:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:19.926389 | orchestrator | 2025-06-22 12:16:19 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:19.927910 | orchestrator | 2025-06-22 12:16:19 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:19.929868 | orchestrator | 2025-06-22 12:16:19 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:19.933047 | orchestrator | 2025-06-22 12:16:19 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:19.933085 | orchestrator | 2025-06-22 12:16:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:22.975013 | orchestrator | 2025-06-22 12:16:22 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:22.976375 | orchestrator | 2025-06-22 12:16:22 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:22.978620 | orchestrator | 2025-06-22 12:16:22 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:22.981809 | orchestrator | 2025-06-22 12:16:22 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:22.981934 | orchestrator | 2025-06-22 12:16:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:26.036342 | orchestrator | 2025-06-22 12:16:26 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:26.037447 | orchestrator | 2025-06-22 12:16:26 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:26.040351 | orchestrator | 2025-06-22 12:16:26 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:26.042352 | orchestrator | 2025-06-22 12:16:26 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:26.042563 | orchestrator | 2025-06-22 12:16:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:29.092912 | orchestrator | 2025-06-22 12:16:29 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:29.095841 | orchestrator | 2025-06-22 12:16:29 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:29.101953 | orchestrator | 2025-06-22 12:16:29 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:29.108851 | orchestrator | 2025-06-22 12:16:29 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:29.108974 | orchestrator | 2025-06-22 12:16:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:32.157018 | orchestrator | 2025-06-22 12:16:32 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:32.159662 | orchestrator | 2025-06-22 12:16:32 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:32.161217 | orchestrator | 2025-06-22 12:16:32 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:32.162932 | orchestrator | 2025-06-22 12:16:32 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:32.162964 | orchestrator | 2025-06-22 12:16:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:35.209211 | orchestrator | 2025-06-22 12:16:35 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:35.211004 | orchestrator | 2025-06-22 12:16:35 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:35.212962 | orchestrator | 2025-06-22 12:16:35 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:35.214895 | orchestrator | 2025-06-22 12:16:35 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:35.214926 | orchestrator | 2025-06-22 12:16:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:38.260820 | orchestrator | 2025-06-22 12:16:38 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:38.262275 | orchestrator | 2025-06-22 12:16:38 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:38.263897 | orchestrator | 2025-06-22 12:16:38 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:38.265661 | orchestrator | 2025-06-22 12:16:38 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:38.266161 | orchestrator | 2025-06-22 12:16:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:41.311081 | orchestrator | 2025-06-22 12:16:41 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:41.313310 | orchestrator | 2025-06-22 12:16:41 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:41.315145 | orchestrator | 2025-06-22 12:16:41 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:41.317458 | orchestrator | 2025-06-22 12:16:41 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:41.317661 | orchestrator | 2025-06-22 12:16:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:44.366975 | orchestrator | 2025-06-22 12:16:44 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:44.368880 | orchestrator | 2025-06-22 12:16:44 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:44.371310 | orchestrator | 2025-06-22 12:16:44 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:44.373302 | orchestrator | 2025-06-22 12:16:44 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:44.373392 | orchestrator | 2025-06-22 12:16:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:47.407496 | orchestrator | 2025-06-22 12:16:47 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:47.407603 | orchestrator | 2025-06-22 12:16:47 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:47.408339 | orchestrator | 2025-06-22 12:16:47 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:47.409467 | orchestrator | 2025-06-22 12:16:47 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:47.409495 | orchestrator | 2025-06-22 12:16:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:50.459561 | orchestrator | 2025-06-22 12:16:50 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:50.460954 | orchestrator | 2025-06-22 12:16:50 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:50.462148 | orchestrator | 2025-06-22 12:16:50 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:50.464479 | orchestrator | 2025-06-22 12:16:50 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:50.464949 | orchestrator | 2025-06-22 12:16:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:53.510758 | orchestrator | 2025-06-22 12:16:53 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:53.511598 | orchestrator | 2025-06-22 12:16:53 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:53.513002 | orchestrator | 2025-06-22 12:16:53 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:53.514189 | orchestrator | 2025-06-22 12:16:53 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:53.514356 | orchestrator | 2025-06-22 12:16:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:56.554890 | orchestrator | 2025-06-22 12:16:56 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:56.555091 | orchestrator | 2025-06-22 12:16:56 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:56.555988 | orchestrator | 2025-06-22 12:16:56 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:56.556653 | orchestrator | 2025-06-22 12:16:56 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:56.557831 | orchestrator | 2025-06-22 12:16:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:16:59.588895 | orchestrator | 2025-06-22 12:16:59 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:16:59.589251 | orchestrator | 2025-06-22 12:16:59 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:16:59.589972 | orchestrator | 2025-06-22 12:16:59 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:16:59.591038 | orchestrator | 2025-06-22 12:16:59 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:16:59.591115 | orchestrator | 2025-06-22 12:16:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:02.639719 | orchestrator | 2025-06-22 12:17:02 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:02.642881 | orchestrator | 2025-06-22 12:17:02 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:17:02.646207 | orchestrator | 2025-06-22 12:17:02 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:02.648024 | orchestrator | 2025-06-22 12:17:02 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:02.648048 | orchestrator | 2025-06-22 12:17:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:05.695326 | orchestrator | 2025-06-22 12:17:05 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:05.696385 | orchestrator | 2025-06-22 12:17:05 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:17:05.696945 | orchestrator | 2025-06-22 12:17:05 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:05.697558 | orchestrator | 2025-06-22 12:17:05 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:05.697789 | orchestrator | 2025-06-22 12:17:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:08.728484 | orchestrator | 2025-06-22 12:17:08 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:08.728570 | orchestrator | 2025-06-22 12:17:08 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:17:08.728647 | orchestrator | 2025-06-22 12:17:08 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:08.729434 | orchestrator | 2025-06-22 12:17:08 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:08.729457 | orchestrator | 2025-06-22 12:17:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:11.770879 | orchestrator | 2025-06-22 12:17:11 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:11.772548 | orchestrator | 2025-06-22 12:17:11 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:17:11.774645 | orchestrator | 2025-06-22 12:17:11 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:11.776011 | orchestrator | 2025-06-22 12:17:11 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:11.776054 | orchestrator | 2025-06-22 12:17:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:14.818600 | orchestrator | 2025-06-22 12:17:14 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:14.819822 | orchestrator | 2025-06-22 12:17:14 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:17:14.822946 | orchestrator | 2025-06-22 12:17:14 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:14.825560 | orchestrator | 2025-06-22 12:17:14 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:14.825814 | orchestrator | 2025-06-22 12:17:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:17.878848 | orchestrator | 2025-06-22 12:17:17 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:17.881272 | orchestrator | 2025-06-22 12:17:17 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state STARTED 2025-06-22 12:17:17.883401 | orchestrator | 2025-06-22 12:17:17 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:17.884633 | orchestrator | 2025-06-22 12:17:17 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:17.887184 | orchestrator | 2025-06-22 12:17:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:20.931306 | orchestrator | 2025-06-22 12:17:20 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:20.932959 | orchestrator | 2025-06-22 12:17:20 | INFO  | Task 8df36d77-9c3c-4ab2-920d-47ef1cb450a1 is in state SUCCESS 2025-06-22 12:17:20.935186 | orchestrator | 2025-06-22 12:17:20.935316 | orchestrator | 2025-06-22 12:17:20.935335 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:17:20.935348 | orchestrator | 2025-06-22 12:17:20.935360 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:17:20.935373 | orchestrator | Sunday 22 June 2025 12:14:51 +0000 (0:00:00.286) 0:00:00.286 *********** 2025-06-22 12:17:20.935384 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:17:20.935397 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:17:20.935408 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:17:20.935420 | orchestrator | 2025-06-22 12:17:20.935431 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:17:20.935443 | orchestrator | Sunday 22 June 2025 12:14:52 +0000 (0:00:00.396) 0:00:00.683 *********** 2025-06-22 12:17:20.935454 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-22 12:17:20.935466 | orchestrator | ok: [testbed-node-1] => 
(item=enable_glance_True) 2025-06-22 12:17:20.935478 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-22 12:17:20.935489 | orchestrator | 2025-06-22 12:17:20.935500 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-22 12:17:20.935512 | orchestrator | 2025-06-22 12:17:20.935523 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 12:17:20.935534 | orchestrator | Sunday 22 June 2025 12:14:53 +0000 (0:00:00.816) 0:00:01.500 *********** 2025-06-22 12:17:20.935546 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:17:20.935557 | orchestrator | 2025-06-22 12:17:20.935569 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-22 12:17:20.935580 | orchestrator | Sunday 22 June 2025 12:14:53 +0000 (0:00:00.700) 0:00:02.200 *********** 2025-06-22 12:17:20.935591 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-22 12:17:20.935603 | orchestrator | 2025-06-22 12:17:20.935614 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-22 12:17:20.935625 | orchestrator | Sunday 22 June 2025 12:14:57 +0000 (0:00:03.717) 0:00:05.917 *********** 2025-06-22 12:17:20.935637 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-22 12:17:20.935670 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-22 12:17:20.935682 | orchestrator | 2025-06-22 12:17:20.935694 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-22 12:17:20.935705 | orchestrator | Sunday 22 June 2025 12:15:04 +0000 (0:00:06.761) 0:00:12.678 *********** 2025-06-22 12:17:20.935716 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2025-06-22 12:17:20.935728 | orchestrator | 2025-06-22 12:17:20.935765 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-22 12:17:20.935779 | orchestrator | Sunday 22 June 2025 12:15:07 +0000 (0:00:03.273) 0:00:15.952 *********** 2025-06-22 12:17:20.935792 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:17:20.935805 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-22 12:17:20.935818 | orchestrator | 2025-06-22 12:17:20.935830 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-22 12:17:20.935842 | orchestrator | Sunday 22 June 2025 12:15:11 +0000 (0:00:03.907) 0:00:19.859 *********** 2025-06-22 12:17:20.935854 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 12:17:20.935867 | orchestrator | 2025-06-22 12:17:20.935879 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-22 12:17:20.935892 | orchestrator | Sunday 22 June 2025 12:15:14 +0000 (0:00:03.280) 0:00:23.140 *********** 2025-06-22 12:17:20.935905 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-22 12:17:20.935918 | orchestrator | 2025-06-22 12:17:20.935930 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-22 12:17:20.935943 | orchestrator | Sunday 22 June 2025 12:15:18 +0000 (0:00:03.896) 0:00:27.037 *********** 2025-06-22 12:17:20.935995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.936016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.936044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.936059 | orchestrator | 2025-06-22 12:17:20.936071 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 12:17:20.936083 | orchestrator | Sunday 22 June 2025 12:15:22 +0000 (0:00:03.409) 0:00:30.447 *********** 2025-06-22 12:17:20.936102 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:17:20.936114 | orchestrator | 2025-06-22 12:17:20.936124 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-22 12:17:20.936135 | orchestrator | Sunday 22 June 2025 12:15:22 +0000 (0:00:00.687) 0:00:31.134 *********** 2025-06-22 12:17:20.936145 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:17:20.936156 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:17:20.936167 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.936177 | orchestrator | 2025-06-22 12:17:20.936188 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-22 12:17:20.936198 | 
orchestrator | Sunday 22 June 2025 12:15:26 +0000 (0:00:03.353) 0:00:34.488 *********** 2025-06-22 12:17:20.936209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:20.936220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:20.936231 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:20.936241 | orchestrator | 2025-06-22 12:17:20.936252 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-22 12:17:20.936269 | orchestrator | Sunday 22 June 2025 12:15:27 +0000 (0:00:01.587) 0:00:36.075 *********** 2025-06-22 12:17:20.936313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:20.936325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:20.936336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:20.936347 | orchestrator | 2025-06-22 12:17:20.936357 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-22 12:17:20.936368 | orchestrator | Sunday 22 June 2025 12:15:28 +0000 (0:00:01.153) 0:00:37.229 *********** 2025-06-22 12:17:20.936379 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:17:20.936389 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:17:20.936400 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:17:20.936411 | orchestrator | 2025-06-22 12:17:20.936421 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-22 12:17:20.936432 | orchestrator | Sunday 22 June 2025 12:15:29 +0000 (0:00:00.907) 
0:00:38.137 *********** 2025-06-22 12:17:20.936442 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.936453 | orchestrator | 2025-06-22 12:17:20.936463 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-22 12:17:20.936474 | orchestrator | Sunday 22 June 2025 12:15:30 +0000 (0:00:00.234) 0:00:38.371 *********** 2025-06-22 12:17:20.936485 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.936495 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.936506 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.936516 | orchestrator | 2025-06-22 12:17:20.936527 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 12:17:20.936538 | orchestrator | Sunday 22 June 2025 12:15:30 +0000 (0:00:00.442) 0:00:38.813 *********** 2025-06-22 12:17:20.936548 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:17:20.936559 | orchestrator | 2025-06-22 12:17:20.936570 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-22 12:17:20.936580 | orchestrator | Sunday 22 June 2025 12:15:30 +0000 (0:00:00.542) 0:00:39.356 *********** 2025-06-22 12:17:20.936604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.936629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.936689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.936704 | orchestrator | 2025-06-22 12:17:20.936715 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-22 12:17:20.936726 | orchestrator | Sunday 22 June 2025 12:15:35 +0000 (0:00:04.365) 0:00:43.722 *********** 2025-06-22 12:17:20.936746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 12:17:20.936767 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.936779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 12:17:20.936791 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.936816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 12:17:20.936836 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.936847 | orchestrator | 2025-06-22 12:17:20.936858 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-22 12:17:20.936869 | orchestrator | Sunday 22 June 2025 12:15:38 +0000 (0:00:02.883) 0:00:46.605 *********** 2025-06-22 12:17:20.936880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 12:17:20.936892 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.936914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 12:17:20.936934 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.936945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 12:17:20.936957 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.936968 | orchestrator | 2025-06-22 12:17:20.936979 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-22 12:17:20.936990 | orchestrator | Sunday 22 June 2025 12:15:41 +0000 (0:00:03.160) 0:00:49.766 *********** 2025-06-22 12:17:20.937000 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.937011 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937022 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937033 | orchestrator | 2025-06-22 12:17:20.937043 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-22 12:17:20.937054 | orchestrator | Sunday 22 June 2025 12:15:44 +0000 (0:00:02.902) 0:00:52.669 *********** 2025-06-22 12:17:20.937077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.937103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.937120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.937139 | orchestrator | 2025-06-22 12:17:20.937150 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-22 12:17:20.937160 | orchestrator | Sunday 22 June 2025 12:15:47 +0000 (0:00:03.603) 0:00:56.272 *********** 2025-06-22 12:17:20.937171 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:17:20.937182 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.937193 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:17:20.937203 | orchestrator | 2025-06-22 12:17:20.937214 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-22 12:17:20.937389 | orchestrator | Sunday 22 June 2025 12:15:53 +0000 (0:00:05.768) 0:01:02.041 *********** 2025-06-22 12:17:20.937404 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937415 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937426 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.937437 | orchestrator | 2025-06-22 12:17:20.937448 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-22 12:17:20.937459 | orchestrator | Sunday 22 June 2025 12:15:57 +0000 (0:00:04.014) 0:01:06.056 *********** 2025-06-22 12:17:20.937469 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937480 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937491 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.937501 | orchestrator | 2025-06-22 12:17:20.937512 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-22 12:17:20.937523 | 
orchestrator | Sunday 22 June 2025 12:16:02 +0000 (0:00:05.079) 0:01:11.135 *********** 2025-06-22 12:17:20.937534 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937544 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937555 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.937566 | orchestrator | 2025-06-22 12:17:20.937577 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-22 12:17:20.937587 | orchestrator | Sunday 22 June 2025 12:16:06 +0000 (0:00:03.466) 0:01:14.601 *********** 2025-06-22 12:17:20.937598 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937609 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937620 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.937631 | orchestrator | 2025-06-22 12:17:20.937641 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-22 12:17:20.937712 | orchestrator | Sunday 22 June 2025 12:16:09 +0000 (0:00:03.150) 0:01:17.752 *********** 2025-06-22 12:17:20.937724 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937735 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.937746 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937756 | orchestrator | 2025-06-22 12:17:20.937768 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-22 12:17:20.937779 | orchestrator | Sunday 22 June 2025 12:16:09 +0000 (0:00:00.268) 0:01:18.020 *********** 2025-06-22 12:17:20.937789 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 12:17:20.937800 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.937811 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 12:17:20.937822 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 12:17:20.937833 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 12:17:20.937844 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.937864 | orchestrator | 2025-06-22 12:17:20.937875 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-22 12:17:20.937886 | orchestrator | Sunday 22 June 2025 12:16:13 +0000 (0:00:03.354) 0:01:21.375 *********** 2025-06-22 12:17:20.937898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.937962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.937977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 12:17:20.937997 | orchestrator | 2025-06-22 12:17:20.938008 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 12:17:20.938084 | orchestrator | 
Sunday 22 June 2025 12:16:16 +0000 (0:00:03.224) 0:01:24.599 *********** 2025-06-22 12:17:20.938100 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:20.938111 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:20.938122 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:20.938132 | orchestrator | 2025-06-22 12:17:20.938142 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-22 12:17:20.938153 | orchestrator | Sunday 22 June 2025 12:16:16 +0000 (0:00:00.257) 0:01:24.857 *********** 2025-06-22 12:17:20.938163 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.938173 | orchestrator | 2025-06-22 12:17:20.938183 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-22 12:17:20.938194 | orchestrator | Sunday 22 June 2025 12:16:18 +0000 (0:00:02.027) 0:01:26.885 *********** 2025-06-22 12:17:20.938204 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.938214 | orchestrator | 2025-06-22 12:17:20.938224 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-22 12:17:20.938234 | orchestrator | Sunday 22 June 2025 12:16:20 +0000 (0:00:02.224) 0:01:29.110 *********** 2025-06-22 12:17:20.938244 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.938254 | orchestrator | 2025-06-22 12:17:20.938265 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-22 12:17:20.938282 | orchestrator | Sunday 22 June 2025 12:16:22 +0000 (0:00:02.159) 0:01:31.269 *********** 2025-06-22 12:17:20.938292 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.938303 | orchestrator | 2025-06-22 12:17:20.938313 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-22 12:17:20.938323 | orchestrator | Sunday 22 June 2025 12:16:48 +0000 (0:00:25.805) 0:01:57.075 *********** 
2025-06-22 12:17:20.938333 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.938343 | orchestrator | 2025-06-22 12:17:20.938353 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 12:17:20.938363 | orchestrator | Sunday 22 June 2025 12:16:51 +0000 (0:00:02.837) 0:01:59.913 *********** 2025-06-22 12:17:20.938374 | orchestrator | 2025-06-22 12:17:20.938384 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 12:17:20.938394 | orchestrator | Sunday 22 June 2025 12:16:51 +0000 (0:00:00.062) 0:01:59.976 *********** 2025-06-22 12:17:20.938404 | orchestrator | 2025-06-22 12:17:20.938414 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 12:17:20.938430 | orchestrator | Sunday 22 June 2025 12:16:51 +0000 (0:00:00.062) 0:02:00.038 *********** 2025-06-22 12:17:20.938440 | orchestrator | 2025-06-22 12:17:20.938450 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-22 12:17:20.938459 | orchestrator | Sunday 22 June 2025 12:16:51 +0000 (0:00:00.063) 0:02:00.102 *********** 2025-06-22 12:17:20.938469 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:20.938478 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:17:20.938488 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:17:20.938498 | orchestrator | 2025-06-22 12:17:20.938507 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:17:20.938518 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 12:17:20.938530 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 12:17:20.938540 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 
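The glance container definitions earlier in this play carry hand-written HAProxy `custom_member_list` entries, one `server` line per control node (`server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5`, and so on). As a minimal sketch of how such member lines can be rendered — a hypothetical helper for illustration, not part of kolla-ansible or OSISM — assuming the node name/IP pairs and port seen in the log:

```python
def haproxy_member_lines(nodes, port, inter=2000, rise=2, fall=5):
    """Render HAProxy backend 'server' lines in the same shape as the
    custom_member_list entries shown in the glance container definitions.

    nodes: iterable of (hostname, ip) pairs; inter/rise/fall mirror the
    health-check parameters used in the log (hypothetical defaults).
    """
    return [
        f"server {name} {ip}:{port} check inter {inter} rise {rise} fall {fall}"
        for name, ip in nodes
    ]

# Usage with the three control nodes from the log:
members = haproxy_member_lines(
    [("testbed-node-0", "192.168.16.10"),
     ("testbed-node-1", "192.168.16.11"),
     ("testbed-node-2", "192.168.16.12")],
    port=9292,
)
```

In the actual deployment these lines are fed to HAProxy verbatim; `check inter 2000 rise 2 fall 5` marks a backend healthy after 2 consecutive successful probes (2 s apart) and unhealthy after 5 failures.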
2025-06-22 12:17:20.938549 | orchestrator | 2025-06-22 12:17:20.938559 | orchestrator | 2025-06-22 12:17:20.938569 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:17:20.938578 | orchestrator | Sunday 22 June 2025 12:17:19 +0000 (0:00:27.826) 0:02:27.928 *********** 2025-06-22 12:17:20.938588 | orchestrator | =============================================================================== 2025-06-22 12:17:20.938598 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.83s 2025-06-22 12:17:20.938607 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.81s 2025-06-22 12:17:20.938617 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.76s 2025-06-22 12:17:20.938627 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.77s 2025-06-22 12:17:20.938636 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.08s 2025-06-22 12:17:20.938663 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.37s 2025-06-22 12:17:20.938673 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.01s 2025-06-22 12:17:20.938683 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.91s 2025-06-22 12:17:20.938692 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.90s 2025-06-22 12:17:20.938702 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.72s 2025-06-22 12:17:20.938712 | orchestrator | glance : Copying over config.json files for services -------------------- 3.60s 2025-06-22 12:17:20.938721 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.47s 2025-06-22 12:17:20.938731 | orchestrator | glance : 
Ensuring config directories exist ------------------------------ 3.41s 2025-06-22 12:17:20.938741 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.35s 2025-06-22 12:17:20.938750 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.35s 2025-06-22 12:17:20.938760 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.28s 2025-06-22 12:17:20.938773 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.27s 2025-06-22 12:17:20.938783 | orchestrator | glance : Check glance containers ---------------------------------------- 3.22s 2025-06-22 12:17:20.938793 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.16s 2025-06-22 12:17:20.938802 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.15s 2025-06-22 12:17:20.938812 | orchestrator | 2025-06-22 12:17:20 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:20.938822 | orchestrator | 2025-06-22 12:17:20 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:20.938837 | orchestrator | 2025-06-22 12:17:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:23.984094 | orchestrator | 2025-06-22 12:17:23 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:23.987566 | orchestrator | 2025-06-22 12:17:23 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:23.989531 | orchestrator | 2025-06-22 12:17:23 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:23.992993 | orchestrator | 2025-06-22 12:17:23 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:23.993034 | orchestrator | 2025-06-22 12:17:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
12:17:27.043473 | orchestrator | 2025-06-22 12:17:27 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:27.043584 | orchestrator | 2025-06-22 12:17:27 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:27.046125 | orchestrator | 2025-06-22 12:17:27 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:27.048158 | orchestrator | 2025-06-22 12:17:27 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:27.048278 | orchestrator | 2025-06-22 12:17:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:30.095118 | orchestrator | 2025-06-22 12:17:30 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:30.095733 | orchestrator | 2025-06-22 12:17:30 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:30.097923 | orchestrator | 2025-06-22 12:17:30 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:30.098199 | orchestrator | 2025-06-22 12:17:30 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:30.098229 | orchestrator | 2025-06-22 12:17:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:33.143813 | orchestrator | 2025-06-22 12:17:33 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:33.145052 | orchestrator | 2025-06-22 12:17:33 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:33.147313 | orchestrator | 2025-06-22 12:17:33 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:33.148327 | orchestrator | 2025-06-22 12:17:33 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:33.148542 | orchestrator | 2025-06-22 12:17:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:36.183944 | orchestrator 
| 2025-06-22 12:17:36 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:36.186660 | orchestrator | 2025-06-22 12:17:36 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:36.187473 | orchestrator | 2025-06-22 12:17:36 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:36.189129 | orchestrator | 2025-06-22 12:17:36 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:36.189171 | orchestrator | 2025-06-22 12:17:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:39.248411 | orchestrator | 2025-06-22 12:17:39 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:39.249887 | orchestrator | 2025-06-22 12:17:39 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:39.250842 | orchestrator | 2025-06-22 12:17:39 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:39.251605 | orchestrator | 2025-06-22 12:17:39 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:39.251669 | orchestrator | 2025-06-22 12:17:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:42.300875 | orchestrator | 2025-06-22 12:17:42 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:42.303546 | orchestrator | 2025-06-22 12:17:42 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:42.306307 | orchestrator | 2025-06-22 12:17:42 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:42.309286 | orchestrator | 2025-06-22 12:17:42 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:42.309312 | orchestrator | 2025-06-22 12:17:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:45.366391 | orchestrator | 2025-06-22 12:17:45 | INFO  | 
Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:45.366493 | orchestrator | 2025-06-22 12:17:45 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:45.367562 | orchestrator | 2025-06-22 12:17:45 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:45.368855 | orchestrator | 2025-06-22 12:17:45 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state STARTED 2025-06-22 12:17:45.369058 | orchestrator | 2025-06-22 12:17:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:48.427367 | orchestrator | 2025-06-22 12:17:48 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:48.430783 | orchestrator | 2025-06-22 12:17:48 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:48.430824 | orchestrator | 2025-06-22 12:17:48 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:48.432050 | orchestrator | 2025-06-22 12:17:48 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:17:48.434471 | orchestrator | 2025-06-22 12:17:48 | INFO  | Task 341d523d-d73b-4ff6-99dd-39905e240810 is in state SUCCESS 2025-06-22 12:17:48.434746 | orchestrator | 2025-06-22 12:17:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:48.436410 | orchestrator | 2025-06-22 12:17:48.436440 | orchestrator | 2025-06-22 12:17:48.436453 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:17:48.436464 | orchestrator | 2025-06-22 12:17:48.436476 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:17:48.436487 | orchestrator | Sunday 22 June 2025 12:14:57 +0000 (0:00:00.189) 0:00:00.189 *********** 2025-06-22 12:17:48.436499 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:17:48.436512 | orchestrator | ok: 
[testbed-node-1] 2025-06-22 12:17:48.436523 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:17:48.436535 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:17:48.436546 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:17:48.436557 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:17:48.436568 | orchestrator | 2025-06-22 12:17:48.436580 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:17:48.436591 | orchestrator | Sunday 22 June 2025 12:14:58 +0000 (0:00:00.513) 0:00:00.703 *********** 2025-06-22 12:17:48.436603 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-22 12:17:48.436614 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-22 12:17:48.436657 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-22 12:17:48.436668 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-22 12:17:48.436702 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-22 12:17:48.436713 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-22 12:17:48.436724 | orchestrator | 2025-06-22 12:17:48.436735 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-22 12:17:48.436746 | orchestrator | 2025-06-22 12:17:48.436757 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 12:17:48.436767 | orchestrator | Sunday 22 June 2025 12:14:58 +0000 (0:00:00.562) 0:00:01.265 *********** 2025-06-22 12:17:48.436779 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:17:48.436792 | orchestrator | 2025-06-22 12:17:48.436803 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-22 12:17:48.436814 | orchestrator | Sunday 22 
June 2025 12:14:59 +0000 (0:00:00.968) 0:00:02.233 *********** 2025-06-22 12:17:48.436825 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-22 12:17:48.436836 | orchestrator | 2025-06-22 12:17:48.436847 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-22 12:17:48.436857 | orchestrator | Sunday 22 June 2025 12:15:03 +0000 (0:00:03.602) 0:00:05.836 *********** 2025-06-22 12:17:48.436868 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-22 12:17:48.436879 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-22 12:17:48.436890 | orchestrator | 2025-06-22 12:17:48.437378 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-22 12:17:48.437389 | orchestrator | Sunday 22 June 2025 12:15:10 +0000 (0:00:06.631) 0:00:12.467 *********** 2025-06-22 12:17:48.437401 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 12:17:48.437411 | orchestrator | 2025-06-22 12:17:48.437436 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-22 12:17:48.437448 | orchestrator | Sunday 22 June 2025 12:15:13 +0000 (0:00:03.357) 0:00:15.825 *********** 2025-06-22 12:17:48.437458 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:17:48.437469 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-22 12:17:48.437480 | orchestrator | 2025-06-22 12:17:48.437490 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-22 12:17:48.437501 | orchestrator | Sunday 22 June 2025 12:15:17 +0000 (0:00:03.923) 0:00:19.749 *********** 2025-06-22 12:17:48.437512 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 
12:17:48.437522 | orchestrator | 2025-06-22 12:17:48.437533 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-22 12:17:48.437543 | orchestrator | Sunday 22 June 2025 12:15:21 +0000 (0:00:04.054) 0:00:23.803 *********** 2025-06-22 12:17:48.437554 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-22 12:17:48.437565 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-22 12:17:48.437575 | orchestrator | 2025-06-22 12:17:48.437586 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-22 12:17:48.437596 | orchestrator | Sunday 22 June 2025 12:15:29 +0000 (0:00:08.015) 0:00:31.818 *********** 2025-06-22 12:17:48.437658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.437686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.437698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.437716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.437960 | orchestrator | 2025-06-22 12:17:48.437971 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 12:17:48.437982 | orchestrator | Sunday 22 June 2025 12:15:31 +0000 (0:00:01.760) 0:00:33.579 *********** 2025-06-22 12:17:48.437996 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.438009 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.438361 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.438376 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.438387 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.438409 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.438420 | orchestrator | 2025-06-22 12:17:48.438431 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 12:17:48.438442 | orchestrator | Sunday 22 June 2025 12:15:31 +0000 (0:00:00.663) 0:00:34.243 *********** 2025-06-22 12:17:48.438452 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.438463 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.438473 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.438484 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:17:48.438495 | orchestrator | 2025-06-22 12:17:48.438506 | 
orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-22 12:17:48.438517 | orchestrator | Sunday 22 June 2025 12:15:33 +0000 (0:00:01.164) 0:00:35.407 *********** 2025-06-22 12:17:48.438527 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-22 12:17:48.438539 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-22 12:17:48.438550 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-22 12:17:48.438560 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-22 12:17:48.438571 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-22 12:17:48.438582 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-22 12:17:48.438592 | orchestrator | 2025-06-22 12:17:48.438603 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-22 12:17:48.438614 | orchestrator | Sunday 22 June 2025 12:15:34 +0000 (0:00:01.941) 0:00:37.349 *********** 2025-06-22 12:17:48.438655 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, 
{'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 12:17:48.438669 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 12:17:48.438724 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 12:17:48.438738 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 12:17:48.438750 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 12:17:48.438766 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 12:17:48.438778 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 12:17:48.438822 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 12:17:48.438836 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 12:17:48.438847 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 12:17:48.438864 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 12:17:48.438884 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 12:17:48.438895 | orchestrator | 2025-06-22 12:17:48.438906 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-22 12:17:48.438917 | orchestrator | Sunday 22 June 2025 12:15:38 +0000 (0:00:03.151) 0:00:40.500 *********** 2025-06-22 12:17:48.438928 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:48.438940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:48.438951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 12:17:48.438962 | orchestrator | 2025-06-22 12:17:48.438974 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-22 12:17:48.438987 | orchestrator | Sunday 22 June 2025 12:15:40 +0000 (0:00:01.890) 0:00:42.390 *********** 2025-06-22 12:17:48.439025 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-22 12:17:48.439039 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-22 12:17:48.439051 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-22 12:17:48.439063 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 12:17:48.439075 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 12:17:48.439087 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 12:17:48.439100 | orchestrator | 2025-06-22 12:17:48.439112 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-22 12:17:48.439124 | orchestrator | Sunday 22 June 2025 12:15:42 +0000 (0:00:02.707) 0:00:45.098 *********** 2025-06-22 12:17:48.439137 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-22 12:17:48.439149 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-22 12:17:48.439163 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-22 12:17:48.439177 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-22 12:17:48.439189 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-22 12:17:48.439201 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-22 12:17:48.439212 | orchestrator | 2025-06-22 
12:17:48.439225 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-22 12:17:48.439237 | orchestrator | Sunday 22 June 2025 12:15:43 +0000 (0:00:01.164) 0:00:46.263 *********** 2025-06-22 12:17:48.439250 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.439262 | orchestrator | 2025-06-22 12:17:48.439275 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-22 12:17:48.439287 | orchestrator | Sunday 22 June 2025 12:15:43 +0000 (0:00:00.105) 0:00:46.369 *********** 2025-06-22 12:17:48.439299 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.439312 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.439332 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.439343 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.439354 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.439365 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.439376 | orchestrator | 2025-06-22 12:17:48.439387 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 12:17:48.439398 | orchestrator | Sunday 22 June 2025 12:15:44 +0000 (0:00:00.670) 0:00:47.039 *********** 2025-06-22 12:17:48.439410 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:17:48.439422 | orchestrator | 2025-06-22 12:17:48.439433 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-22 12:17:48.439444 | orchestrator | Sunday 22 June 2025 12:15:45 +0000 (0:00:01.163) 0:00:48.203 *********** 2025-06-22 12:17:48.439466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.439479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.439519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.439533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2025-06-22 12:17:48.439676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.439752 | orchestrator | 2025-06-22 12:17:48.439763 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-22 12:17:48.439774 | orchestrator | Sunday 22 June 2025 12:15:48 +0000 (0:00:02.978) 0:00:51.181 *********** 2025-06-22 12:17:48.439793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.439805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.439837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.439865 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439877 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.439896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439927 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.439938 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.439949 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.439960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.439988 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.439999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440036 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 12:17:48.440047 | orchestrator | 2025-06-22 12:17:48.440058 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-22 12:17:48.440069 | orchestrator | Sunday 22 June 2025 12:15:50 +0000 (0:00:01.241) 0:00:52.423 *********** 2025-06-22 12:17:48.440080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.440092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440183 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.440204 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.440215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440225 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.440242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.440260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440270 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.440280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440305 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.440315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-06-22 12:17:48.440331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440348 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.440358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.440378 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.440388 | orchestrator | 2025-06-22 12:17:48.440397 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-22 12:17:48.440407 | orchestrator | Sunday 22 June 2025 12:15:51 +0000 (0:00:01.763) 0:00:54.187 *********** 2025-06-22 12:17:48.440421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.440432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.440458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.440483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440585 | orchestrator | 2025-06-22 12:17:48.440595 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-22 12:17:48.440638 | orchestrator | Sunday 22 June 2025 12:15:54 +0000 (0:00:02.814) 0:00:57.002 *********** 2025-06-22 12:17:48.440650 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 12:17:48.440659 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.440669 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 12:17:48.440678 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.440688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 12:17:48.440698 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 12:17:48.440707 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.440717 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 12:17:48.440732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 12:17:48.440742 | orchestrator | 2025-06-22 12:17:48.440752 | orchestrator | TASK [cinder : Copying over cinder.conf] 
*************************************** 2025-06-22 12:17:48.440761 | orchestrator | Sunday 22 June 2025 12:15:56 +0000 (0:00:02.246) 0:00:59.248 *********** 2025-06-22 12:17:48.440771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.440781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.440797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.440815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.440928 | orchestrator | 2025-06-22 12:17:48.440938 | orchestrator | TASK 
[cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-22 12:17:48.440947 | orchestrator | Sunday 22 June 2025 12:16:05 +0000 (0:00:08.591) 0:01:07.840 *********** 2025-06-22 12:17:48.440957 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.440966 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.440976 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.440986 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:17:48.440995 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:17:48.441004 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:17:48.441013 | orchestrator | 2025-06-22 12:17:48.441023 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-22 12:17:48.441032 | orchestrator | Sunday 22 June 2025 12:16:07 +0000 (0:00:02.188) 0:01:10.029 *********** 2025-06-22 12:17:48.441046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.441062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.441087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.441098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441108 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.441118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 12:17:48.441133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441149 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.441159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441179 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.441194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441215 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.441229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 12:17:48.441256 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.441265 | orchestrator | 2025-06-22 12:17:48.441275 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-22 12:17:48.441284 | orchestrator | Sunday 22 June 2025 12:16:09 +0000 (0:00:01.534) 0:01:11.564 *********** 2025-06-22 12:17:48.441294 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.441303 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.441312 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.441322 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.441331 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.441341 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.441350 | orchestrator | 2025-06-22 12:17:48.441360 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-22 
12:17:48.441369 | orchestrator | Sunday 22 June 2025 12:16:09 +0000 (0:00:00.675) 0:01:12.239 *********** 2025-06-22 12:17:48.441385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.441429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.441440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 12:17:48.441466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441506 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:17:48.441541 | orchestrator | 2025-06-22 12:17:48.441551 | orchestrator | TASK [cinder : include_tasks] ************************************************** 
2025-06-22 12:17:48.441570 | orchestrator | Sunday 22 June 2025 12:16:12 +0000 (0:00:02.460) 0:01:14.700 *********** 2025-06-22 12:17:48.441580 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.441590 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:17:48.441600 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:17:48.441609 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:17:48.441667 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:17:48.441678 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:17:48.441688 | orchestrator | 2025-06-22 12:17:48.441697 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-22 12:17:48.441707 | orchestrator | Sunday 22 June 2025 12:16:13 +0000 (0:00:00.699) 0:01:15.399 *********** 2025-06-22 12:17:48.441716 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:48.441726 | orchestrator | 2025-06-22 12:17:48.441735 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-22 12:17:48.441745 | orchestrator | Sunday 22 June 2025 12:16:15 +0000 (0:00:02.247) 0:01:17.646 *********** 2025-06-22 12:17:48.441754 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:48.441764 | orchestrator | 2025-06-22 12:17:48.441773 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-22 12:17:48.441783 | orchestrator | Sunday 22 June 2025 12:16:17 +0000 (0:00:02.247) 0:01:19.894 *********** 2025-06-22 12:17:48.441792 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:48.441801 | orchestrator | 2025-06-22 12:17:48.441811 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 12:17:48.441820 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:18.097) 0:01:37.991 *********** 2025-06-22 12:17:48.441830 | orchestrator | 2025-06-22 12:17:48.441839 | orchestrator | TASK [cinder : 
Flush handlers] ************************************************* 2025-06-22 12:17:48.441849 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:00.062) 0:01:38.054 *********** 2025-06-22 12:17:48.441858 | orchestrator | 2025-06-22 12:17:48.441867 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 12:17:48.441877 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:00.061) 0:01:38.116 *********** 2025-06-22 12:17:48.441886 | orchestrator | 2025-06-22 12:17:48.441896 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 12:17:48.441910 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:00.064) 0:01:38.180 *********** 2025-06-22 12:17:48.441920 | orchestrator | 2025-06-22 12:17:48.441929 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 12:17:48.441939 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:00.063) 0:01:38.244 *********** 2025-06-22 12:17:48.441948 | orchestrator | 2025-06-22 12:17:48.441958 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 12:17:48.441967 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:00.063) 0:01:38.307 *********** 2025-06-22 12:17:48.441976 | orchestrator | 2025-06-22 12:17:48.441986 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-22 12:17:48.441995 | orchestrator | Sunday 22 June 2025 12:16:35 +0000 (0:00:00.059) 0:01:38.367 *********** 2025-06-22 12:17:48.442005 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:48.442040 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:17:48.442052 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:17:48.442062 | orchestrator | 2025-06-22 12:17:48.442072 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 
2025-06-22 12:17:48.442081 | orchestrator | Sunday 22 June 2025 12:16:57 +0000 (0:00:21.645) 0:02:00.012 *********** 2025-06-22 12:17:48.442090 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:17:48.442100 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:17:48.442109 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:17:48.442119 | orchestrator | 2025-06-22 12:17:48.442128 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-22 12:17:48.442138 | orchestrator | Sunday 22 June 2025 12:17:03 +0000 (0:00:06.325) 0:02:06.338 *********** 2025-06-22 12:17:48.442155 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:17:48.442164 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:17:48.442173 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:17:48.442183 | orchestrator | 2025-06-22 12:17:48.442191 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-22 12:17:48.442199 | orchestrator | Sunday 22 June 2025 12:17:33 +0000 (0:00:29.955) 0:02:36.293 *********** 2025-06-22 12:17:48.442207 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:17:48.442214 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:17:48.442222 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:17:48.442230 | orchestrator | 2025-06-22 12:17:48.442238 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-22 12:17:48.442245 | orchestrator | Sunday 22 June 2025 12:17:44 +0000 (0:00:10.549) 0:02:46.842 *********** 2025-06-22 12:17:48.442253 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:17:48.442261 | orchestrator | 2025-06-22 12:17:48.442268 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:17:48.442281 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 
12:17:48.442290 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 12:17:48.442298 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 12:17:48.442306 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 12:17:48.442314 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 12:17:48.442322 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 12:17:48.442329 | orchestrator | 2025-06-22 12:17:48.442337 | orchestrator | 2025-06-22 12:17:48.442345 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:17:48.442353 | orchestrator | Sunday 22 June 2025 12:17:45 +0000 (0:00:00.630) 0:02:47.473 *********** 2025-06-22 12:17:48.442360 | orchestrator | =============================================================================== 2025-06-22 12:17:48.442368 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 29.96s 2025-06-22 12:17:48.442376 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.65s 2025-06-22 12:17:48.442384 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.10s 2025-06-22 12:17:48.442392 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.55s 2025-06-22 12:17:48.442399 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 8.59s 2025-06-22 12:17:48.442407 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.02s 2025-06-22 12:17:48.442415 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.63s 2025-06-22 
12:17:48.442423 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.33s 2025-06-22 12:17:48.442430 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.05s 2025-06-22 12:17:48.442438 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.92s 2025-06-22 12:17:48.442446 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.60s 2025-06-22 12:17:48.442454 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.36s 2025-06-22 12:17:48.442461 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.15s 2025-06-22 12:17:48.442469 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.98s 2025-06-22 12:17:48.442486 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.81s 2025-06-22 12:17:48.442494 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.71s 2025-06-22 12:17:48.442502 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.46s 2025-06-22 12:17:48.442509 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.25s 2025-06-22 12:17:48.442517 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.25s 2025-06-22 12:17:48.442525 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.25s 2025-06-22 12:17:51.487591 | orchestrator | 2025-06-22 12:17:51 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:51.491429 | orchestrator | 2025-06-22 12:17:51 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:51.493598 | orchestrator | 2025-06-22 12:17:51 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in 
state STARTED 2025-06-22 12:17:51.496798 | orchestrator | 2025-06-22 12:17:51 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:17:51.496892 | orchestrator | 2025-06-22 12:17:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:54.543904 | orchestrator | 2025-06-22 12:17:54 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:54.545544 | orchestrator | 2025-06-22 12:17:54 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:54.546942 | orchestrator | 2025-06-22 12:17:54 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:54.548737 | orchestrator | 2025-06-22 12:17:54 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:17:54.548787 | orchestrator | 2025-06-22 12:17:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:17:57.580492 | orchestrator | 2025-06-22 12:17:57 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:17:57.581190 | orchestrator | 2025-06-22 12:17:57 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:17:57.582447 | orchestrator | 2025-06-22 12:17:57 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:17:57.583510 | orchestrator | 2025-06-22 12:17:57 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:17:57.583533 | orchestrator | 2025-06-22 12:17:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:00.640868 | orchestrator | 2025-06-22 12:18:00 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:00.642408 | orchestrator | 2025-06-22 12:18:00 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:00.643444 | orchestrator | 2025-06-22 12:18:00 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 
12:18:00.644705 | orchestrator | 2025-06-22 12:18:00 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:00.644965 | orchestrator | 2025-06-22 12:18:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:03.701391 | orchestrator | 2025-06-22 12:18:03 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:03.702302 | orchestrator | 2025-06-22 12:18:03 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:03.703374 | orchestrator | 2025-06-22 12:18:03 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:03.704545 | orchestrator | 2025-06-22 12:18:03 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:03.704838 | orchestrator | 2025-06-22 12:18:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:06.757706 | orchestrator | 2025-06-22 12:18:06 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:06.760210 | orchestrator | 2025-06-22 12:18:06 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:06.760253 | orchestrator | 2025-06-22 12:18:06 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:06.761332 | orchestrator | 2025-06-22 12:18:06 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:06.761353 | orchestrator | 2025-06-22 12:18:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:09.822875 | orchestrator | 2025-06-22 12:18:09 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:09.824582 | orchestrator | 2025-06-22 12:18:09 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:09.826423 | orchestrator | 2025-06-22 12:18:09 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:09.830010 | orchestrator 
| 2025-06-22 12:18:09 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:09.830456 | orchestrator | 2025-06-22 12:18:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:12.882357 | orchestrator | 2025-06-22 12:18:12 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:12.883179 | orchestrator | 2025-06-22 12:18:12 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:12.885577 | orchestrator | 2025-06-22 12:18:12 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:12.887332 | orchestrator | 2025-06-22 12:18:12 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:12.887372 | orchestrator | 2025-06-22 12:18:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:15.952999 | orchestrator | 2025-06-22 12:18:15 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:15.954349 | orchestrator | 2025-06-22 12:18:15 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:15.956805 | orchestrator | 2025-06-22 12:18:15 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:15.959145 | orchestrator | 2025-06-22 12:18:15 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:15.959244 | orchestrator | 2025-06-22 12:18:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:19.006415 | orchestrator | 2025-06-22 12:18:19 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:19.008225 | orchestrator | 2025-06-22 12:18:19 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:19.010232 | orchestrator | 2025-06-22 12:18:19 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:19.012198 | orchestrator | 2025-06-22 12:18:19 | INFO  | 
Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:19.012230 | orchestrator | 2025-06-22 12:18:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:22.065516 | orchestrator | 2025-06-22 12:18:22 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:22.067445 | orchestrator | 2025-06-22 12:18:22 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:22.069953 | orchestrator | 2025-06-22 12:18:22 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state STARTED 2025-06-22 12:18:22.072493 | orchestrator | 2025-06-22 12:18:22 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:22.072545 | orchestrator | 2025-06-22 12:18:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:25.117874 | orchestrator | 2025-06-22 12:18:25 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:25.120837 | orchestrator | 2025-06-22 12:18:25 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:25.122166 | orchestrator | 2025-06-22 12:18:25 | INFO  | Task 80d1467d-5a98-4cfb-a350-5c826c0883e9 is in state SUCCESS 2025-06-22 12:18:25.123845 | orchestrator | 2025-06-22 12:18:25 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:25.123886 | orchestrator | 2025-06-22 12:18:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:28.171975 | orchestrator | 2025-06-22 12:18:28 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:28.173728 | orchestrator | 2025-06-22 12:18:28 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:28.175015 | orchestrator | 2025-06-22 12:18:28 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:28.176517 | orchestrator | 2025-06-22 12:18:28 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:28.176699 | orchestrator | 2025-06-22 12:18:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:31.216352 | orchestrator | 2025-06-22 12:18:31 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:31.218222 | orchestrator | 2025-06-22 12:18:31 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:31.219720 | orchestrator | 2025-06-22 12:18:31 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:31.221283 | orchestrator | 2025-06-22 12:18:31 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:31.221307 | orchestrator | 2025-06-22 12:18:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:34.271172 | orchestrator | 2025-06-22 12:18:34 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:34.273945 | orchestrator | 2025-06-22 12:18:34 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:34.276216 | orchestrator | 2025-06-22 12:18:34 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:34.278394 | orchestrator | 2025-06-22 12:18:34 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:34.278774 | orchestrator | 2025-06-22 12:18:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:37.324929 | orchestrator | 2025-06-22 12:18:37 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:37.325044 | orchestrator | 2025-06-22 12:18:37 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:37.326247 | orchestrator | 2025-06-22 12:18:37 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:37.327273 | orchestrator | 2025-06-22 12:18:37 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:37.327680 | orchestrator | 2025-06-22 12:18:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:40.387010 | orchestrator | 2025-06-22 12:18:40 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:40.390308 | orchestrator | 2025-06-22 12:18:40 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:40.393281 | orchestrator | 2025-06-22 12:18:40 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:40.396312 | orchestrator | 2025-06-22 12:18:40 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:40.396438 | orchestrator | 2025-06-22 12:18:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:43.436782 | orchestrator | 2025-06-22 12:18:43 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:43.439208 | orchestrator | 2025-06-22 12:18:43 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:43.442680 | orchestrator | 2025-06-22 12:18:43 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:43.450095 | orchestrator | 2025-06-22 12:18:43 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:43.450126 | orchestrator | 2025-06-22 12:18:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:46.496252 | orchestrator | 2025-06-22 12:18:46 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:46.496363 | orchestrator | 2025-06-22 12:18:46 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:46.497731 | orchestrator | 2025-06-22 12:18:46 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:46.499385 | orchestrator | 2025-06-22 12:18:46 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:46.499696 | orchestrator | 2025-06-22 12:18:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:49.551082 | orchestrator | 2025-06-22 12:18:49 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:49.552758 | orchestrator | 2025-06-22 12:18:49 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:49.554641 | orchestrator | 2025-06-22 12:18:49 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:49.556775 | orchestrator | 2025-06-22 12:18:49 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:49.556800 | orchestrator | 2025-06-22 12:18:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:52.603402 | orchestrator | 2025-06-22 12:18:52 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:52.605616 | orchestrator | 2025-06-22 12:18:52 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:52.607828 | orchestrator | 2025-06-22 12:18:52 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:52.609629 | orchestrator | 2025-06-22 12:18:52 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:52.609653 | orchestrator | 2025-06-22 12:18:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:55.651707 | orchestrator | 2025-06-22 12:18:55 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:55.654196 | orchestrator | 2025-06-22 12:18:55 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:55.655535 | orchestrator | 2025-06-22 12:18:55 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:55.657232 | orchestrator | 2025-06-22 12:18:55 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:55.657447 | orchestrator | 2025-06-22 12:18:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:18:58.711139 | orchestrator | 2025-06-22 12:18:58 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:18:58.713489 | orchestrator | 2025-06-22 12:18:58 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:18:58.715840 | orchestrator | 2025-06-22 12:18:58 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:18:58.717656 | orchestrator | 2025-06-22 12:18:58 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:18:58.717686 | orchestrator | 2025-06-22 12:18:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:01.760968 | orchestrator | 2025-06-22 12:19:01 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:01.762891 | orchestrator | 2025-06-22 12:19:01 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:01.766791 | orchestrator | 2025-06-22 12:19:01 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:01.769777 | orchestrator | 2025-06-22 12:19:01 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:01.769825 | orchestrator | 2025-06-22 12:19:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:04.813262 | orchestrator | 2025-06-22 12:19:04 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:04.815496 | orchestrator | 2025-06-22 12:19:04 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:04.816656 | orchestrator | 2025-06-22 12:19:04 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:04.818944 | orchestrator | 2025-06-22 12:19:04 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:04.819034 | orchestrator | 2025-06-22 12:19:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:07.862803 | orchestrator | 2025-06-22 12:19:07 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:07.864129 | orchestrator | 2025-06-22 12:19:07 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:07.865608 | orchestrator | 2025-06-22 12:19:07 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:07.867381 | orchestrator | 2025-06-22 12:19:07 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:07.867499 | orchestrator | 2025-06-22 12:19:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:10.909239 | orchestrator | 2025-06-22 12:19:10 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:10.910229 | orchestrator | 2025-06-22 12:19:10 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:10.911827 | orchestrator | 2025-06-22 12:19:10 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:10.913109 | orchestrator | 2025-06-22 12:19:10 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:10.913131 | orchestrator | 2025-06-22 12:19:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:13.967971 | orchestrator | 2025-06-22 12:19:13 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:13.970283 | orchestrator | 2025-06-22 12:19:13 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:13.972802 | orchestrator | 2025-06-22 12:19:13 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:13.974655 | orchestrator | 2025-06-22 12:19:13 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:13.974694 | orchestrator | 2025-06-22 12:19:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:17.020491 | orchestrator | 2025-06-22 12:19:17 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:17.021703 | orchestrator | 2025-06-22 12:19:17 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:17.023377 | orchestrator | 2025-06-22 12:19:17 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:17.024668 | orchestrator | 2025-06-22 12:19:17 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:17.024988 | orchestrator | 2025-06-22 12:19:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:20.070628 | orchestrator | 2025-06-22 12:19:20 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:20.072239 | orchestrator | 2025-06-22 12:19:20 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:20.072271 | orchestrator | 2025-06-22 12:19:20 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:20.073604 | orchestrator | 2025-06-22 12:19:20 | INFO  | Task 6d615a8f-823c-499d-ab00-2084d946665e is in state STARTED 2025-06-22 12:19:20.073626 | orchestrator | 2025-06-22 12:19:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:23.118193 | orchestrator | 2025-06-22 12:19:23 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:23.119912 | orchestrator | 2025-06-22 12:19:23 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:23.121028 | orchestrator | 2025-06-22 12:19:23 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:23.122254 | orchestrator | 2025-06-22 12:19:23 | INFO  | Task 
6d615a8f-823c-499d-ab00-2084d946665e is in state SUCCESS 2025-06-22 12:19:23.122480 | orchestrator | 2025-06-22 12:19:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:23.123117 | orchestrator | 2025-06-22 12:19:23.123145 | orchestrator | 2025-06-22 12:19:23.123157 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-22 12:19:23.123169 | orchestrator | 2025-06-22 12:19:23.123180 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-22 12:19:23.123192 | orchestrator | Sunday 22 June 2025 12:12:10 +0000 (0:00:00.349) 0:00:00.349 *********** 2025-06-22 12:19:23.123203 | orchestrator | changed: [localhost] 2025-06-22 12:19:23.123214 | orchestrator | 2025-06-22 12:19:23.123226 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-22 12:19:23.123237 | orchestrator | Sunday 22 June 2025 12:12:11 +0000 (0:00:01.278) 0:00:01.628 *********** 2025-06-22 12:19:23.123248 | orchestrator | 2025-06-22 12:19:23.123259 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123270 | orchestrator | 2025-06-22 12:19:23.123281 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123292 | orchestrator | 2025-06-22 12:19:23.123302 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123313 | orchestrator | 2025-06-22 12:19:23.123324 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123335 | orchestrator | 2025-06-22 12:19:23.123345 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123356 | orchestrator | 2025-06-22 12:19:23.123367 | orchestrator | STILL ALIVE [task 'Download ironic-agent 
initramfs' is running] **************** 2025-06-22 12:19:23.123404 | orchestrator | 2025-06-22 12:19:23.123415 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123426 | orchestrator | 2025-06-22 12:19:23.123437 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 12:19:23.123448 | orchestrator | changed: [localhost] 2025-06-22 12:19:23.123459 | orchestrator | 2025-06-22 12:19:23.123470 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-22 12:19:23.123480 | orchestrator | Sunday 22 June 2025 12:18:09 +0000 (0:05:58.493) 0:06:00.122 *********** 2025-06-22 12:19:23.123491 | orchestrator | changed: [localhost] 2025-06-22 12:19:23.123502 | orchestrator | 2025-06-22 12:19:23.123513 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:19:23.123560 | orchestrator | 2025-06-22 12:19:23.123572 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:19:23.123583 | orchestrator | Sunday 22 June 2025 12:18:23 +0000 (0:00:13.086) 0:06:13.208 *********** 2025-06-22 12:19:23.123593 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:23.123605 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:19:23.123615 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:19:23.123626 | orchestrator | 2025-06-22 12:19:23.123637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:19:23.123648 | orchestrator | Sunday 22 June 2025 12:18:23 +0000 (0:00:00.340) 0:06:13.549 *********** 2025-06-22 12:19:23.123659 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-22 12:19:23.123670 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-22 12:19:23.123681 | orchestrator | ok: 
[testbed-node-1] => (item=enable_ironic_False) 2025-06-22 12:19:23.123691 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-22 12:19:23.123702 | orchestrator | 2025-06-22 12:19:23.123728 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-22 12:19:23.123741 | orchestrator | skipping: no hosts matched 2025-06-22 12:19:23.123754 | orchestrator | 2025-06-22 12:19:23.123766 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:19:23.123778 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.123793 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.123807 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.123819 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.123831 | orchestrator | 2025-06-22 12:19:23.123843 | orchestrator | 2025-06-22 12:19:23.123855 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:19:23.123868 | orchestrator | Sunday 22 June 2025 12:18:23 +0000 (0:00:00.423) 0:06:13.973 *********** 2025-06-22 12:19:23.123880 | orchestrator | =============================================================================== 2025-06-22 12:19:23.123893 | orchestrator | Download ironic-agent initramfs --------------------------------------- 358.49s 2025-06-22 12:19:23.123905 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.09s 2025-06-22 12:19:23.123917 | orchestrator | Ensure the destination directory exists --------------------------------- 1.28s 2025-06-22 12:19:23.123930 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.42s 2025-06-22 12:19:23.123942 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-06-22 12:19:23.123954 | orchestrator | 2025-06-22 12:19:23.123967 | orchestrator | 2025-06-22 12:19:23.123979 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:19:23.123999 | orchestrator | 2025-06-22 12:19:23.124013 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:19:23.124025 | orchestrator | Sunday 22 June 2025 12:18:27 +0000 (0:00:00.194) 0:00:00.194 *********** 2025-06-22 12:19:23.124037 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:23.124049 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:19:23.124061 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:19:23.124073 | orchestrator | 2025-06-22 12:19:23.124085 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:19:23.124097 | orchestrator | Sunday 22 June 2025 12:18:27 +0000 (0:00:00.213) 0:00:00.407 *********** 2025-06-22 12:19:23.124107 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-22 12:19:23.124130 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-22 12:19:23.124142 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-22 12:19:23.124153 | orchestrator | 2025-06-22 12:19:23.124164 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-22 12:19:23.124175 | orchestrator | 2025-06-22 12:19:23.124185 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 12:19:23.124196 | orchestrator | Sunday 22 June 2025 12:18:28 +0000 (0:00:00.337) 0:00:00.745 *********** 2025-06-22 12:19:23.124207 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-22 12:19:23.124218 | orchestrator | 2025-06-22 12:19:23.124228 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-22 12:19:23.124239 | orchestrator | Sunday 22 June 2025 12:18:28 +0000 (0:00:00.484) 0:00:01.230 *********** 2025-06-22 12:19:23.124250 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-22 12:19:23.124261 | orchestrator | 2025-06-22 12:19:23.124271 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-22 12:19:23.124282 | orchestrator | Sunday 22 June 2025 12:18:32 +0000 (0:00:03.703) 0:00:04.933 *********** 2025-06-22 12:19:23.124293 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-22 12:19:23.124303 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-22 12:19:23.124314 | orchestrator | 2025-06-22 12:19:23.124325 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-22 12:19:23.124336 | orchestrator | Sunday 22 June 2025 12:18:39 +0000 (0:00:07.153) 0:00:12.087 *********** 2025-06-22 12:19:23.124346 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 12:19:23.124357 | orchestrator | 2025-06-22 12:19:23.124368 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-22 12:19:23.124379 | orchestrator | Sunday 22 June 2025 12:18:42 +0000 (0:00:03.236) 0:00:15.324 *********** 2025-06-22 12:19:23.124390 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:19:23.124400 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 12:19:23.124411 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 12:19:23.124422 | orchestrator | 2025-06-22 
12:19:23.124432 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-22 12:19:23.124443 | orchestrator | Sunday 22 June 2025 12:18:51 +0000 (0:00:08.267) 0:00:23.591 *********** 2025-06-22 12:19:23.124454 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 12:19:23.124464 | orchestrator | 2025-06-22 12:19:23.124475 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-22 12:19:23.124485 | orchestrator | Sunday 22 June 2025 12:18:54 +0000 (0:00:03.422) 0:00:27.013 *********** 2025-06-22 12:19:23.124496 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 12:19:23.124512 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 12:19:23.124563 | orchestrator | 2025-06-22 12:19:23.124575 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-22 12:19:23.124593 | orchestrator | Sunday 22 June 2025 12:19:01 +0000 (0:00:07.340) 0:00:34.354 *********** 2025-06-22 12:19:23.124603 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-22 12:19:23.124614 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-22 12:19:23.124625 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-22 12:19:23.124635 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-22 12:19:23.124646 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-22 12:19:23.124656 | orchestrator | 2025-06-22 12:19:23.124667 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 12:19:23.124678 | orchestrator | Sunday 22 June 2025 12:19:18 +0000 (0:00:16.200) 0:00:50.554 *********** 2025-06-22 12:19:23.124689 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:19:23.124699 | orchestrator | 2025-06-22 12:19:23.124710 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-22 12:19:23.124721 | orchestrator | Sunday 22 June 2025 12:19:18 +0000 (0:00:00.541) 0:00:51.096 *********** 2025-06-22 12:19:23.124735 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "503 Service Unavailable\nNo server is available to handle this request.\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-06-22 12:19:23.124749 | orchestrator | 2025-06-22 12:19:23.124760 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:19:23.124771 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.124782 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.124799 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:23.124811 | orchestrator | 2025-06-22 12:19:23.124822 | orchestrator | 2025-06-22 12:19:23.124832 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:19:23.124843 | orchestrator | Sunday 22 June 2025 12:19:22 +0000 (0:00:03.569) 0:00:54.665 *********** 2025-06-22 12:19:23.124854 | orchestrator | =============================================================================== 2025-06-22 12:19:23.124864 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.20s 2025-06-22 12:19:23.124875 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.27s 2025-06-22 12:19:23.124886 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.34s 2025-06-22 12:19:23.124896 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.15s 2025-06-22 12:19:23.124907 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.70s 2025-06-22 12:19:23.124918 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 
3.57s 2025-06-22 12:19:23.124928 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.42s 2025-06-22 12:19:23.124939 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.24s 2025-06-22 12:19:23.124950 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.54s 2025-06-22 12:19:23.124960 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.48s 2025-06-22 12:19:23.124971 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-06-22 12:19:23.124988 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.21s 2025-06-22 12:19:26.173013 | orchestrator | 2025-06-22 12:19:26 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:26.174731 | orchestrator | 2025-06-22 12:19:26 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:26.176367 | orchestrator | 2025-06-22 12:19:26 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state STARTED 2025-06-22 12:19:26.176404 | orchestrator | 2025-06-22 12:19:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:29.222269 | orchestrator | 2025-06-22 12:19:29 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:29.226718 | orchestrator | 2025-06-22 12:19:29 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:29.228729 | orchestrator | 2025-06-22 12:19:29 | INFO  | Task 71c1984b-1cfe-49c1-ae2a-bbe3480e4a61 is in state SUCCESS 2025-06-22 12:19:29.229208 | orchestrator | 2025-06-22 12:19:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:32.272625 | orchestrator | 2025-06-22 12:19:32 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:32.275399 | orchestrator | 2025-06-22 12:19:32 | INFO  | 
Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:32.275448 | orchestrator | 2025-06-22 12:19:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:35.313332 | orchestrator | 2025-06-22 12:19:35 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:35.313651 | orchestrator | 2025-06-22 12:19:35 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:35.313739 | orchestrator | 2025-06-22 12:19:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:38.363061 | orchestrator | 2025-06-22 12:19:38 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:38.364854 | orchestrator | 2025-06-22 12:19:38 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:38.365259 | orchestrator | 2025-06-22 12:19:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:41.406398 | orchestrator | 2025-06-22 12:19:41 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:41.407863 | orchestrator | 2025-06-22 12:19:41 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:41.407895 | orchestrator | 2025-06-22 12:19:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:44.459890 | orchestrator | 2025-06-22 12:19:44 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:44.460354 | orchestrator | 2025-06-22 12:19:44 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:44.460389 | orchestrator | 2025-06-22 12:19:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:47.503443 | orchestrator | 2025-06-22 12:19:47 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:47.504672 | orchestrator | 2025-06-22 12:19:47 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 
12:19:47.504729 | orchestrator | 2025-06-22 12:19:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:50.544253 | orchestrator | 2025-06-22 12:19:50 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state STARTED 2025-06-22 12:19:50.544358 | orchestrator | 2025-06-22 12:19:50 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:50.544752 | orchestrator | 2025-06-22 12:19:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:19:53.592886 | orchestrator | 2025-06-22 12:19:53 | INFO  | Task dd79b289-1b86-4709-982f-aa455fc92f08 is in state SUCCESS 2025-06-22 12:19:53.594895 | orchestrator | 2025-06-22 12:19:53.594941 | orchestrator | 2025-06-22 12:19:53.594955 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:19:53.594967 | orchestrator | 2025-06-22 12:19:53.595021 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:19:53.595035 | orchestrator | Sunday 22 June 2025 12:17:49 +0000 (0:00:00.170) 0:00:00.170 *********** 2025-06-22 12:19:53.595047 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:53.595090 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:19:53.595102 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:19:53.595172 | orchestrator | 2025-06-22 12:19:53.595186 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:19:53.595232 | orchestrator | Sunday 22 June 2025 12:17:49 +0000 (0:00:00.323) 0:00:00.494 *********** 2025-06-22 12:19:53.595275 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 12:19:53.595354 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 12:19:53.595379 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 12:19:53.595390 | orchestrator | 2025-06-22 12:19:53.595401 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2025-06-22 12:19:53.595428 | orchestrator | 2025-06-22 12:19:53.595440 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-22 12:19:53.595451 | orchestrator | Sunday 22 June 2025 12:17:50 +0000 (0:00:00.696) 0:00:01.190 *********** 2025-06-22 12:19:53.595463 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:53.595475 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:19:53.595486 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:19:53.595531 | orchestrator | 2025-06-22 12:19:53.595545 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:19:53.595558 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:53.595572 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:53.595619 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:19:53.595632 | orchestrator | 2025-06-22 12:19:53.595645 | orchestrator | 2025-06-22 12:19:53.595673 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:19:53.595685 | orchestrator | Sunday 22 June 2025 12:19:28 +0000 (0:01:38.030) 0:01:39.221 *********** 2025-06-22 12:19:53.595695 | orchestrator | =============================================================================== 2025-06-22 12:19:53.595717 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 98.03s 2025-06-22 12:19:53.595728 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-06-22 12:19:53.595738 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-06-22 12:19:53.595749 | orchestrator | 2025-06-22 12:19:53.595759 | orchestrator 
| 2025-06-22 12:19:53.595770 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:19:53.595780 | orchestrator | 2025-06-22 12:19:53.595800 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:19:53.595811 | orchestrator | Sunday 22 June 2025 12:17:23 +0000 (0:00:00.264) 0:00:00.264 *********** 2025-06-22 12:19:53.595822 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:53.595832 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:19:53.595843 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:19:53.595853 | orchestrator | 2025-06-22 12:19:53.595864 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:19:53.595874 | orchestrator | Sunday 22 June 2025 12:17:24 +0000 (0:00:00.314) 0:00:00.579 *********** 2025-06-22 12:19:53.595906 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-22 12:19:53.595917 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-22 12:19:53.595927 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-22 12:19:53.595938 | orchestrator | 2025-06-22 12:19:53.595975 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-22 12:19:53.595986 | orchestrator | 2025-06-22 12:19:53.595996 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 12:19:53.596008 | orchestrator | Sunday 22 June 2025 12:17:24 +0000 (0:00:00.407) 0:00:00.986 *********** 2025-06-22 12:19:53.596018 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:19:53.596039 | orchestrator | 2025-06-22 12:19:53.596050 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-22 12:19:53.596060 | orchestrator | Sunday 
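The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a status-polling loop. A minimal sketch of that pattern follows; `get_state` is a hypothetical callable standing in for whatever client the real osism tooling uses to query task state, so names and signatures here are illustrative only:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until every task reaches a terminal state.

    get_state: hypothetical callable mapping a task ID to a state string
    such as "STARTED" or "SUCCESS" (the real client API differs).
    Returns a dict of final states, or raises TimeoutError.
    """
    terminal = {"SUCCESS", "FAILURE"}
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
        # Drop tasks that have finished; keep polling the rest.
        pending = {t for t in pending if states[t] not in terminal}
        if pending:
            time.sleep(interval)  # "Wait 1 second(s) until the next check"
    return states
```

In the log, three tasks are polled together and drop out one by one as they reach SUCCESS, which matches the set-difference step above.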
22 June 2025 12:17:25 +0000 (0:00:00.507) 0:00:01.493 *********** 2025-06-22 12:19:53.596074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596130 | orchestrator | 2025-06-22 12:19:53.596141 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-22 12:19:53.596152 | orchestrator | Sunday 22 June 2025 12:17:25 +0000 (0:00:00.772) 0:00:02.266 *********** 2025-06-22 12:19:53.596163 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-22 12:19:53.596174 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-22 12:19:53.596185 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:19:53.596196 | orchestrator | 2025-06-22 12:19:53.596213 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 12:19:53.596224 | orchestrator | Sunday 22 June 2025 12:17:26 +0000 (0:00:00.819) 0:00:03.085 *********** 2025-06-22 12:19:53.596235 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:19:53.596265 | orchestrator | 2025-06-22 12:19:53.596276 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-22 12:19:53.596287 | orchestrator | Sunday 22 June 2025 12:17:27 +0000 (0:00:00.766) 0:00:03.852 *********** 2025-06-22 12:19:53.596298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596333 | orchestrator | 2025-06-22 12:19:53.596350 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-22 12:19:53.596362 | orchestrator | Sunday 22 June 2025 12:17:28 +0000 (0:00:01.253) 0:00:05.105 *********** 2025-06-22 
12:19:53.596373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 12:19:53.596384 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.596395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 12:19:53.596413 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.596429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 12:19:53.596441 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.596452 | orchestrator | 2025-06-22 12:19:53.596462 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-22 12:19:53.596473 | orchestrator | Sunday 22 June 2025 12:17:29 +0000 (0:00:00.352) 0:00:05.457 *********** 2025-06-22 12:19:53.596484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 12:19:53.596525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 12:19:53.596537 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.596548 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.596566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 12:19:53.596578 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.596589 | orchestrator | 2025-06-22 12:19:53.596600 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-22 12:19:53.596611 | orchestrator | Sunday 22 June 2025 12:17:29 +0000 (0:00:00.802) 0:00:06.260 *********** 2025-06-22 12:19:53.596622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596669 | orchestrator | 2025-06-22 12:19:53.596679 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-22 12:19:53.596690 | orchestrator | Sunday 22 June 2025 12:17:31 +0000 (0:00:01.187) 0:00:07.448 *********** 2025-06-22 12:19:53.596701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.596748 | 
orchestrator | 2025-06-22 12:19:53.596759 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-22 12:19:53.596770 | orchestrator | Sunday 22 June 2025 12:17:32 +0000 (0:00:01.278) 0:00:08.726 *********** 2025-06-22 12:19:53.596781 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.596791 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.596802 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.596813 | orchestrator | 2025-06-22 12:19:53.596823 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-22 12:19:53.596834 | orchestrator | Sunday 22 June 2025 12:17:32 +0000 (0:00:00.509) 0:00:09.235 *********** 2025-06-22 12:19:53.596844 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 12:19:53.596855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 12:19:53.596870 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 12:19:53.596881 | orchestrator | 2025-06-22 12:19:53.596892 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-22 12:19:53.596903 | orchestrator | Sunday 22 June 2025 12:17:34 +0000 (0:00:01.215) 0:00:10.451 *********** 2025-06-22 12:19:53.596913 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 12:19:53.596924 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 12:19:53.596935 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 12:19:53.596945 | orchestrator | 2025-06-22 12:19:53.596956 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-22 12:19:53.596966 | orchestrator | Sunday 22 June 2025 12:17:35 +0000 (0:00:01.469) 0:00:11.921 *********** 2025-06-22 12:19:53.596977 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 12:19:53.596988 | orchestrator | 2025-06-22 12:19:53.596998 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-22 12:19:53.597009 | orchestrator | Sunday 22 June 2025 12:17:36 +0000 (0:00:00.750) 0:00:12.671 *********** 2025-06-22 12:19:53.597019 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-22 12:19:53.597030 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-22 12:19:53.597040 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:53.597051 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:19:53.597062 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:19:53.597072 | orchestrator | 2025-06-22 12:19:53.597083 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-22 12:19:53.597094 | orchestrator | Sunday 22 June 2025 12:17:37 +0000 (0:00:00.730) 0:00:13.401 *********** 2025-06-22 12:19:53.597104 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.597115 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.597126 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.597136 | orchestrator | 2025-06-22 12:19:53.597147 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-22 12:19:53.597158 | orchestrator | Sunday 22 June 2025 12:17:37 +0000 (0:00:00.553) 0:00:13.955 *********** 2025-06-22 12:19:53.597169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1055493, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8768246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1055493, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8768246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1055493, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8768246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1055466, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8718245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1055466, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8718245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1055466, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8718245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1055453, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8698246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1055453, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8698246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1055453, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8698246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1055489, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1055489, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1055489, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1055421, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8618243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1055421, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8618243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1055421, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8618243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1055456, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8698246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1055456, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8698246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1055456, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8698246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1055483, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.597451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1055483, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1055483, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1055419, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8608243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1055419, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8608243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1055419, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8608243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1055395, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8558242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1055395, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8558242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1055395, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8558242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1055428, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8628244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1055428, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8628244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1055428, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8628244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1055400, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8588243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1055400, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8588243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1055400, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8588243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1055480, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8728247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1055480, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8728247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1055480, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8728247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1055433, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8638244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1055433, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8638244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1055433, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8638244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1055491, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1055491, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1055491, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8738246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1055415, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8608243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1055415, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8608243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1055415, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8608243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1055460, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8708246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1055460, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8708246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1055460, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8708246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1055397, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8568244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1055397, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8568244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1055397, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8568244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1055408, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8598244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1055408, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8598244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1055408, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8598244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1055449, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8688245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.598983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1055449, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8688245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1055449, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8688245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1055557, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.896825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1055557, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.896825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1055557, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.896825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1055544, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.888825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1055544, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.888825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-22 12:19:53.599090 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1055544, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.888825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1055514, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8768246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1055514, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8768246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-06-22 12:19:53.599130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1055514, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8768246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1055595, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9048252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1055595, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9048252, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1055595, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9048252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1055516, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8778248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1055516, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750591775.8778248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1055516, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8778248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1055583, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9018252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1055583, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9018252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1055583, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9018252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1055601, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9078252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1055601, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9078252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1055601, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9078252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1055574, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8988252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1055574, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8988252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1055574, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8988252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1055582, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9008253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 
12:19:53.599385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1055582, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9008253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1055582, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9008253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1055518, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8798246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1055518, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8798246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1055518, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8798246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1055551, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8898249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1055551, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8898249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1055551, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8898249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1055609, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9088254, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1055609, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9088254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1055609, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9088254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1055590, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1750591775.902825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1055590, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.902825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1055590, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.902825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1055524, 'dev': 91, 
'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.882825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1055524, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.882825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1055524, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.882825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1055522, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8808248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1055522, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8808248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1055522, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8808248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1055530, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8838248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1055530, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8838248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1055530, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8838248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1055533, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.887825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1055533, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.887825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1055533, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.887825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1055553, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8898249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1055553, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8898249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1055579, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8988252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599849 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1055553, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8898249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1055579, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8988252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1055579, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.8988252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-22 12:19:53.599890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1055555, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.890825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1055555, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.890825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1055614, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9108253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1055555, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.890825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1055614, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9108253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1055614, 'dev': 91, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750591775.9108253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 12:19:53.599970 | orchestrator | 2025-06-22 12:19:53.599984 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-22 12:19:53.599995 | orchestrator | Sunday 22 June 2025 12:18:15 +0000 (0:00:37.514) 0:00:51.470 *********** 2025-06-22 12:19:53.600102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.600126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.600138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 12:19:53.600149 | orchestrator | 2025-06-22 12:19:53.600166 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-22 12:19:53.600177 | orchestrator | Sunday 22 June 2025 12:18:16 +0000 (0:00:01.018) 0:00:52.489 *********** 2025-06-22 12:19:53.600188 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:19:53.600199 | orchestrator | 2025-06-22 12:19:53.600210 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-22 12:19:53.600220 | orchestrator | Sunday 22 June 2025 12:18:18 +0000 (0:00:02.344) 0:00:54.833 *********** 2025-06-22 12:19:53.600231 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:19:53.600242 | orchestrator | 2025-06-22 12:19:53.600252 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 12:19:53.600263 | orchestrator | Sunday 22 June 2025 12:18:20 +0000 (0:00:02.353) 0:00:57.186 *********** 2025-06-22 12:19:53.600273 | orchestrator | 2025-06-22 12:19:53.600284 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 12:19:53.600295 | orchestrator | Sunday 22 June 2025 12:18:21 +0000 (0:00:00.276) 0:00:57.462 *********** 2025-06-22 12:19:53.600305 | orchestrator | 2025-06-22 12:19:53.600316 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2025-06-22 12:19:53.600326 | orchestrator | Sunday 22 June 2025 12:18:21 +0000 (0:00:00.085) 0:00:57.548 *********** 2025-06-22 12:19:53.600337 | orchestrator | 2025-06-22 12:19:53.600348 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-22 12:19:53.600358 | orchestrator | Sunday 22 June 2025 12:18:21 +0000 (0:00:00.069) 0:00:57.617 *********** 2025-06-22 12:19:53.600369 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.600379 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.600390 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:19:53.600401 | orchestrator | 2025-06-22 12:19:53.600411 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-22 12:19:53.600422 | orchestrator | Sunday 22 June 2025 12:18:23 +0000 (0:00:01.832) 0:00:59.449 *********** 2025-06-22 12:19:53.600433 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.600443 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.600454 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-22 12:19:53.600466 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-22 12:19:53.600482 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-06-22 12:19:53.600513 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:53.600525 | orchestrator | 2025-06-22 12:19:53.600535 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-22 12:19:53.600546 | orchestrator | Sunday 22 June 2025 12:19:02 +0000 (0:00:38.931) 0:01:38.381 *********** 2025-06-22 12:19:53.600557 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.600568 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:19:53.600579 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:19:53.600589 | orchestrator | 2025-06-22 12:19:53.600600 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-22 12:19:53.600611 | orchestrator | Sunday 22 June 2025 12:19:45 +0000 (0:00:43.623) 0:02:22.005 *********** 2025-06-22 12:19:53.600621 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:19:53.600632 | orchestrator | 2025-06-22 12:19:53.600643 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-22 12:19:53.600653 | orchestrator | Sunday 22 June 2025 12:19:48 +0000 (0:00:02.869) 0:02:24.874 *********** 2025-06-22 12:19:53.600664 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.600675 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:19:53.600686 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:19:53.600697 | orchestrator | 2025-06-22 12:19:53.600714 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-22 12:19:53.600726 | orchestrator | Sunday 22 June 2025 12:19:48 +0000 (0:00:00.348) 0:02:25.222 *********** 2025-06-22 12:19:53.600738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-06-22 12:19:53.600750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-22 12:19:53.600762 | orchestrator | 2025-06-22 12:19:53.600773 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-22 12:19:53.600784 | orchestrator | Sunday 22 June 2025 12:19:51 +0000 (0:00:02.524) 0:02:27.747 *********** 2025-06-22 12:19:53.600794 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:19:53.600805 | orchestrator | 2025-06-22 12:19:53.600816 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:19:53.600828 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 12:19:53.600840 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 12:19:53.600850 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 12:19:53.600861 | orchestrator | 2025-06-22 12:19:53.600872 | orchestrator | 2025-06-22 12:19:53.600887 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:19:53.600898 | orchestrator | Sunday 22 June 2025 12:19:51 +0000 (0:00:00.259) 0:02:28.006 *********** 2025-06-22 12:19:53.600909 | orchestrator | =============================================================================== 2025-06-22 12:19:53.600920 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 43.62s 2025-06-22 12:19:53.600930 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 38.93s 2025-06-22 12:19:53.600941 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.51s 2025-06-22 12:19:53.600958 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.87s 2025-06-22 12:19:53.600969 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.52s 2025-06-22 12:19:53.600979 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s 2025-06-22 12:19:53.600990 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.34s 2025-06-22 12:19:53.601000 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.83s 2025-06-22 12:19:53.601011 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.47s 2025-06-22 12:19:53.601022 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.28s 2025-06-22 12:19:53.601032 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.25s 2025-06-22 12:19:53.601043 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s 2025-06-22 12:19:53.601054 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.19s 2025-06-22 12:19:53.601064 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2025-06-22 12:19:53.601075 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s 2025-06-22 12:19:53.601085 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s 2025-06-22 12:19:53.601096 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.77s 2025-06-22 12:19:53.601106 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.77s 2025-06-22 12:19:53.601117 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s 2025-06-22 12:19:53.601128 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s
2025-06-22 12:19:53.601138 | orchestrator | 2025-06-22 12:19:53 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:19:53.601149 | orchestrator | 2025-06-22 12:19:53 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:20:51.500268 | orchestrator | 2025-06-22 12:20:51 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:20:51.500372 | orchestrator | 2025-06-22 12:20:51 | INFO  | Task 76488cde-375b-4762-bd86-1ec1a43695a9 is in state STARTED 2025-06-22 12:20:51.500395 | orchestrator | 2025-06-22 12:20:51 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:21:09.797638 | orchestrator | 2025-06-22 12:21:09 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:21:09.798837 | orchestrator | 2025-06-22 12:21:09 | INFO  | Task 76488cde-375b-4762-bd86-1ec1a43695a9 is in state SUCCESS 2025-06-22 12:21:09.799046 | orchestrator | 2025-06-22 12:21:09 | INFO  | Wait 1 second(s) until the next check
2025-06-22 12:22:29.038820 | orchestrator | 2025-06-22 12:22:29 | INFO
 | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:29.038918 | orchestrator | 2025-06-22 12:22:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:32.096268 | orchestrator | 2025-06-22 12:22:32 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:32.096380 | orchestrator | 2025-06-22 12:22:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:35.156957 | orchestrator | 2025-06-22 12:22:35 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:35.157067 | orchestrator | 2025-06-22 12:22:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:38.206765 | orchestrator | 2025-06-22 12:22:38 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:38.206867 | orchestrator | 2025-06-22 12:22:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:41.248469 | orchestrator | 2025-06-22 12:22:41 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:41.248598 | orchestrator | 2025-06-22 12:22:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:44.296343 | orchestrator | 2025-06-22 12:22:44 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:44.296450 | orchestrator | 2025-06-22 12:22:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:47.340788 | orchestrator | 2025-06-22 12:22:47 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:47.340894 | orchestrator | 2025-06-22 12:22:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:50.385744 | orchestrator | 2025-06-22 12:22:50 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:50.385836 | orchestrator | 2025-06-22 12:22:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:53.436213 | orchestrator | 2025-06-22 12:22:53 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:53.438903 | orchestrator | 2025-06-22 12:22:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:56.479623 | orchestrator | 2025-06-22 12:22:56 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:56.479689 | orchestrator | 2025-06-22 12:22:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:22:59.515356 | orchestrator | 2025-06-22 12:22:59 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:22:59.515444 | orchestrator | 2025-06-22 12:22:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:02.558389 | orchestrator | 2025-06-22 12:23:02 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:02.558516 | orchestrator | 2025-06-22 12:23:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:05.612659 | orchestrator | 2025-06-22 12:23:05 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:05.612803 | orchestrator | 2025-06-22 12:23:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:08.668840 | orchestrator | 2025-06-22 12:23:08 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:08.668956 | orchestrator | 2025-06-22 12:23:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:11.714119 | orchestrator | 2025-06-22 12:23:11 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:11.714223 | orchestrator | 2025-06-22 12:23:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:14.761174 | orchestrator | 2025-06-22 12:23:14 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:14.761287 | orchestrator | 2025-06-22 12:23:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:17.803265 | orchestrator | 2025-06-22 12:23:17 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:17.803368 | orchestrator | 2025-06-22 12:23:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:20.828484 | orchestrator | 2025-06-22 12:23:20 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:20.828576 | orchestrator | 2025-06-22 12:23:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:23.865456 | orchestrator | 2025-06-22 12:23:23 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:23.865543 | orchestrator | 2025-06-22 12:23:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:26.909311 | orchestrator | 2025-06-22 12:23:26 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:26.909406 | orchestrator | 2025-06-22 12:23:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:29.963069 | orchestrator | 2025-06-22 12:23:29 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:29.963173 | orchestrator | 2025-06-22 12:23:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:33.011743 | orchestrator | 2025-06-22 12:23:33 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:33.011856 | orchestrator | 2025-06-22 12:23:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:36.062830 | orchestrator | 2025-06-22 12:23:36 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:36.062934 | orchestrator | 2025-06-22 12:23:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:39.100632 | orchestrator | 2025-06-22 12:23:39 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:39.100797 | orchestrator | 2025-06-22 12:23:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:42.148296 | orchestrator | 2025-06-22 12:23:42 | INFO  | Task 
af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:42.148416 | orchestrator | 2025-06-22 12:23:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:45.196123 | orchestrator | 2025-06-22 12:23:45 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:45.196245 | orchestrator | 2025-06-22 12:23:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:48.232300 | orchestrator | 2025-06-22 12:23:48 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:48.232410 | orchestrator | 2025-06-22 12:23:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:51.280868 | orchestrator | 2025-06-22 12:23:51 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:51.280972 | orchestrator | 2025-06-22 12:23:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:54.322737 | orchestrator | 2025-06-22 12:23:54 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:54.322843 | orchestrator | 2025-06-22 12:23:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:23:57.366155 | orchestrator | 2025-06-22 12:23:57 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state STARTED 2025-06-22 12:23:57.366276 | orchestrator | 2025-06-22 12:23:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 12:24:00.418948 | orchestrator | 2025-06-22 12:24:00 | INFO  | Task af65689d-4099-41af-9848-5306fd0514ea is in state SUCCESS 2025-06-22 12:24:00.421520 | orchestrator | 2025-06-22 12:24:00.421567 | orchestrator | None 2025-06-22 12:24:00.421581 | orchestrator | 2025-06-22 12:24:00.421593 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:24:00.421604 | orchestrator | 2025-06-22 12:24:00.421616 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-22 
12:24:00.421644 | orchestrator | Sunday 22 June 2025 12:15:24 +0000 (0:00:00.328) 0:00:00.328 *********** 2025-06-22 12:24:00.421656 | orchestrator | changed: [testbed-manager] 2025-06-22 12:24:00.421694 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.421709 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:24:00.421720 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:24:00.421731 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.421742 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.421753 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:24:00.421763 | orchestrator | 2025-06-22 12:24:00.421774 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:24:00.421785 | orchestrator | Sunday 22 June 2025 12:15:25 +0000 (0:00:00.744) 0:00:01.073 *********** 2025-06-22 12:24:00.421797 | orchestrator | changed: [testbed-manager] 2025-06-22 12:24:00.421807 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.421818 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:24:00.421829 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:24:00.421839 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.421850 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.421861 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:24:00.421871 | orchestrator | 2025-06-22 12:24:00.421882 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:24:00.421893 | orchestrator | Sunday 22 June 2025 12:15:25 +0000 (0:00:00.627) 0:00:01.700 *********** 2025-06-22 12:24:00.421904 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-22 12:24:00.421965 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 12:24:00.422260 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 12:24:00.422275 | orchestrator | 
changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 12:24:00.422288 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-22 12:24:00.422301 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-22 12:24:00.422314 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-22 12:24:00.422327 | orchestrator | 2025-06-22 12:24:00.422339 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-22 12:24:00.422352 | orchestrator | 2025-06-22 12:24:00.422365 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 12:24:00.422377 | orchestrator | Sunday 22 June 2025 12:15:26 +0000 (0:00:00.848) 0:00:02.549 *********** 2025-06-22 12:24:00.422389 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:24:00.422401 | orchestrator | 2025-06-22 12:24:00.422414 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-22 12:24:00.422427 | orchestrator | Sunday 22 June 2025 12:15:27 +0000 (0:00:00.624) 0:00:03.173 *********** 2025-06-22 12:24:00.422440 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-22 12:24:00.422452 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-22 12:24:00.422486 | orchestrator | 2025-06-22 12:24:00.422498 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-22 12:24:00.422509 | orchestrator | Sunday 22 June 2025 12:15:31 +0000 (0:00:04.284) 0:00:07.458 *********** 2025-06-22 12:24:00.422520 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 12:24:00.422531 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 12:24:00.422602 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.422614 | orchestrator | 2025-06-22 12:24:00.422625 | orchestrator | TASK [nova 
: Ensuring config directories exist] ******************************** 2025-06-22 12:24:00.422636 | orchestrator | Sunday 22 June 2025 12:15:35 +0000 (0:00:04.175) 0:00:11.634 *********** 2025-06-22 12:24:00.422757 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.422789 | orchestrator | 2025-06-22 12:24:00.422801 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-22 12:24:00.422812 | orchestrator | Sunday 22 June 2025 12:15:36 +0000 (0:00:00.665) 0:00:12.299 *********** 2025-06-22 12:24:00.422823 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.422833 | orchestrator | 2025-06-22 12:24:00.422844 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-22 12:24:00.422855 | orchestrator | Sunday 22 June 2025 12:15:37 +0000 (0:00:01.226) 0:00:13.526 *********** 2025-06-22 12:24:00.422866 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.422876 | orchestrator | 2025-06-22 12:24:00.422887 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 12:24:00.422898 | orchestrator | Sunday 22 June 2025 12:15:40 +0000 (0:00:02.868) 0:00:16.395 *********** 2025-06-22 12:24:00.422908 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.422919 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.422930 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.422941 | orchestrator | 2025-06-22 12:24:00.422951 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-22 12:24:00.422962 | orchestrator | Sunday 22 June 2025 12:15:40 +0000 (0:00:00.234) 0:00:16.629 *********** 2025-06-22 12:24:00.422973 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.422984 | orchestrator | 2025-06-22 12:24:00.422994 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-22 
12:24:00.423005 | orchestrator | Sunday 22 June 2025 12:16:10 +0000 (0:00:29.335) 0:00:45.965 *********** 2025-06-22 12:24:00.423026 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.423037 | orchestrator | 2025-06-22 12:24:00.423048 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 12:24:00.423059 | orchestrator | Sunday 22 June 2025 12:16:24 +0000 (0:00:14.552) 0:01:00.518 *********** 2025-06-22 12:24:00.423070 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.423081 | orchestrator | 2025-06-22 12:24:00.423091 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 12:24:00.423160 | orchestrator | Sunday 22 June 2025 12:16:36 +0000 (0:00:12.046) 0:01:12.564 *********** 2025-06-22 12:24:00.423185 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.423196 | orchestrator | 2025-06-22 12:24:00.423207 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-22 12:24:00.423218 | orchestrator | Sunday 22 June 2025 12:16:37 +0000 (0:00:01.176) 0:01:13.741 *********** 2025-06-22 12:24:00.423236 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.423247 | orchestrator | 2025-06-22 12:24:00.423258 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 12:24:00.423268 | orchestrator | Sunday 22 June 2025 12:16:38 +0000 (0:00:00.471) 0:01:14.212 *********** 2025-06-22 12:24:00.423279 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:24:00.423303 | orchestrator | 2025-06-22 12:24:00.423315 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-22 12:24:00.423325 | orchestrator | Sunday 22 June 2025 12:16:38 +0000 (0:00:00.487) 0:01:14.700 *********** 2025-06-22 12:24:00.423347 | 
orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.423358 | orchestrator | 2025-06-22 12:24:00.423369 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 12:24:00.423380 | orchestrator | Sunday 22 June 2025 12:16:58 +0000 (0:00:19.551) 0:01:34.252 *********** 2025-06-22 12:24:00.423390 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.423401 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423412 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423423 | orchestrator | 2025-06-22 12:24:00.423434 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-22 12:24:00.423444 | orchestrator | 2025-06-22 12:24:00.423455 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 12:24:00.423466 | orchestrator | Sunday 22 June 2025 12:16:59 +0000 (0:00:00.569) 0:01:34.822 *********** 2025-06-22 12:24:00.423477 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:24:00.423488 | orchestrator | 2025-06-22 12:24:00.423498 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-22 12:24:00.423509 | orchestrator | Sunday 22 June 2025 12:16:59 +0000 (0:00:00.928) 0:01:35.750 *********** 2025-06-22 12:24:00.423521 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423532 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423543 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.423554 | orchestrator | 2025-06-22 12:24:00.423564 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-22 12:24:00.423575 | orchestrator | Sunday 22 June 2025 12:17:02 +0000 (0:00:02.289) 0:01:38.040 *********** 2025-06-22 12:24:00.423586 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423597 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423608 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.423619 | orchestrator | 2025-06-22 12:24:00.423630 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-22 12:24:00.423640 | orchestrator | Sunday 22 June 2025 12:17:04 +0000 (0:00:02.601) 0:01:40.642 *********** 2025-06-22 12:24:00.423651 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.423662 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423700 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423712 | orchestrator | 2025-06-22 12:24:00.423723 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-22 12:24:00.423734 | orchestrator | Sunday 22 June 2025 12:17:05 +0000 (0:00:00.546) 0:01:41.189 *********** 2025-06-22 12:24:00.423745 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 12:24:00.423756 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423766 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 12:24:00.423777 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423788 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-22 12:24:00.423799 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-22 12:24:00.423810 | orchestrator | 2025-06-22 12:24:00.423821 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-22 12:24:00.423832 | orchestrator | Sunday 22 June 2025 12:17:13 +0000 (0:00:07.891) 0:01:49.080 *********** 2025-06-22 12:24:00.423843 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.423854 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423865 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423875 | orchestrator | 2025-06-22 12:24:00.423886 | orchestrator | TASK 
[service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-22 12:24:00.423897 | orchestrator | Sunday 22 June 2025 12:17:13 +0000 (0:00:00.298) 0:01:49.378 *********** 2025-06-22 12:24:00.423908 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 12:24:00.423919 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.423930 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 12:24:00.423941 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.423958 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 12:24:00.423969 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.423980 | orchestrator | 2025-06-22 12:24:00.423991 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 12:24:00.424002 | orchestrator | Sunday 22 June 2025 12:17:14 +0000 (0:00:00.629) 0:01:50.008 *********** 2025-06-22 12:24:00.424012 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424023 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.424034 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424045 | orchestrator | 2025-06-22 12:24:00.424056 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-22 12:24:00.424067 | orchestrator | Sunday 22 June 2025 12:17:14 +0000 (0:00:00.534) 0:01:50.543 *********** 2025-06-22 12:24:00.424078 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424089 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424099 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.424110 | orchestrator | 2025-06-22 12:24:00.424126 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-22 12:24:00.424145 | orchestrator | Sunday 22 June 2025 12:17:15 +0000 (0:00:01.099) 0:01:51.643 *********** 2025-06-22 12:24:00.424172 | orchestrator | 
skipping: [testbed-node-1] 2025-06-22 12:24:00.424205 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424224 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.424243 | orchestrator | 2025-06-22 12:24:00.424262 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-22 12:24:00.424289 | orchestrator | Sunday 22 June 2025 12:17:17 +0000 (0:00:02.013) 0:01:53.657 *********** 2025-06-22 12:24:00.424303 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424314 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424325 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.424335 | orchestrator | 2025-06-22 12:24:00.424346 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 12:24:00.424357 | orchestrator | Sunday 22 June 2025 12:17:38 +0000 (0:00:20.536) 0:02:14.194 *********** 2025-06-22 12:24:00.424368 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424378 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424389 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.424400 | orchestrator | 2025-06-22 12:24:00.424411 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 12:24:00.424421 | orchestrator | Sunday 22 June 2025 12:17:50 +0000 (0:00:11.941) 0:02:26.135 *********** 2025-06-22 12:24:00.424432 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.424442 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424453 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424464 | orchestrator | 2025-06-22 12:24:00.424474 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-22 12:24:00.424485 | orchestrator | Sunday 22 June 2025 12:17:51 +0000 (0:00:01.256) 0:02:27.392 *********** 2025-06-22 12:24:00.424496 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 12:24:00.424507 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424517 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.424528 | orchestrator | 2025-06-22 12:24:00.424539 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-22 12:24:00.424550 | orchestrator | Sunday 22 June 2025 12:18:04 +0000 (0:00:12.420) 0:02:39.813 *********** 2025-06-22 12:24:00.424561 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.424571 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424582 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424593 | orchestrator | 2025-06-22 12:24:00.424603 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 12:24:00.424614 | orchestrator | Sunday 22 June 2025 12:18:05 +0000 (0:00:01.600) 0:02:41.414 *********** 2025-06-22 12:24:00.424625 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.424645 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.424655 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.424666 | orchestrator | 2025-06-22 12:24:00.424765 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-22 12:24:00.424776 | orchestrator | 2025-06-22 12:24:00.424787 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 12:24:00.424797 | orchestrator | Sunday 22 June 2025 12:18:05 +0000 (0:00:00.354) 0:02:41.769 *********** 2025-06-22 12:24:00.424808 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:24:00.424820 | orchestrator | 2025-06-22 12:24:00.424831 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-22 12:24:00.424841 | orchestrator | Sunday 22 June 2025 12:18:06 
+0000 (0:00:00.560) 0:02:42.329 *********** 2025-06-22 12:24:00.424850 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-22 12:24:00.424860 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-22 12:24:00.424869 | orchestrator | 2025-06-22 12:24:00.424879 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-22 12:24:00.424889 | orchestrator | Sunday 22 June 2025 12:18:09 +0000 (0:00:03.403) 0:02:45.732 *********** 2025-06-22 12:24:00.424898 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-22 12:24:00.424909 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-22 12:24:00.424919 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-22 12:24:00.424929 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-22 12:24:00.424938 | orchestrator | 2025-06-22 12:24:00.424948 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-22 12:24:00.424957 | orchestrator | Sunday 22 June 2025 12:18:17 +0000 (0:00:07.114) 0:02:52.847 *********** 2025-06-22 12:24:00.424967 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 12:24:00.424976 | orchestrator | 2025-06-22 12:24:00.424986 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-22 12:24:00.424995 | orchestrator | Sunday 22 June 2025 12:18:20 +0000 (0:00:03.440) 0:02:56.288 *********** 2025-06-22 12:24:00.425005 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 12:24:00.425015 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 
2025-06-22 12:24:00.425024 | orchestrator |
2025-06-22 12:24:00.425034 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-22 12:24:00.425043 | orchestrator | Sunday 22 June 2025 12:18:24 +0000 (0:00:04.120) 0:03:00.408 ***********
2025-06-22 12:24:00.425053 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-22 12:24:00.425062 | orchestrator |
2025-06-22 12:24:00.425072 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-22 12:24:00.425081 | orchestrator | Sunday 22 June 2025 12:18:27 +0000 (0:00:03.274) 0:03:03.683 ***********
2025-06-22 12:24:00.425091 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-22 12:24:00.425100 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-22 12:24:00.425110 | orchestrator |
2025-06-22 12:24:00.425119 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-22 12:24:00.425136 | orchestrator | Sunday 22 June 2025 12:18:35 +0000 (0:00:07.775) 0:03:11.458 ***********
2025-06-22 12:24:00.425158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425259 | orchestrator |
2025-06-22 12:24:00.425269 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-22 12:24:00.425279 | orchestrator | Sunday 22 June 2025 12:18:36 +0000 (0:00:01.226) 0:03:12.685 ***********
2025-06-22 12:24:00.425289 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.425298 | orchestrator |
2025-06-22 12:24:00.425308 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-22 12:24:00.425318 | orchestrator | Sunday 22 June 2025 12:18:37 +0000 (0:00:00.534) 0:03:12.819 ***********
2025-06-22 12:24:00.425327 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.425337 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:24:00.425347 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:24:00.425357 | orchestrator |
2025-06-22 12:24:00.425366 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-22 12:24:00.425376 | orchestrator | Sunday 22 June 2025 12:18:37 +0000 (0:00:00.534) 0:03:13.354 ***********
2025-06-22 12:24:00.425386 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 12:24:00.425395 | orchestrator |
2025-06-22 12:24:00.425405 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-22 12:24:00.425415 | orchestrator | Sunday 22 June 2025 12:18:38 +0000 (0:00:00.665) 0:03:14.019 ***********
2025-06-22 12:24:00.425424 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.425434 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:24:00.425444 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:24:00.425453 | orchestrator |
2025-06-22 12:24:00.425463 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-22 12:24:00.425472 | orchestrator | Sunday 22 June 2025 12:18:38 +0000 (0:00:00.301) 0:03:14.321 ***********
2025-06-22 12:24:00.425482 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 12:24:00.425492 | orchestrator |
2025-06-22 12:24:00.425501 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-22 12:24:00.425511 | orchestrator | Sunday 22 June 2025 12:18:39 +0000 (0:00:00.694) 0:03:15.015 ***********
2025-06-22 12:24:00.425528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425629 | orchestrator |
2025-06-22 12:24:00.425638 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-22 12:24:00.425648 | orchestrator | Sunday 22 June 2025 12:18:41 +0000 (0:00:02.302) 0:03:17.318 ***********
2025-06-22 12:24:00.425658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425750 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.425762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425797 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:24:00.425812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425833 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:24:00.425843 | orchestrator |
2025-06-22 12:24:00.425853 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-22 12:24:00.425863 | orchestrator | Sunday 22 June 2025 12:18:42 +0000 (0:00:00.574) 0:03:17.893 ***********
2025-06-22 12:24:00.425873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425900 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.425921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425944 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:24:00.425954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.425970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.425980 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:24:00.425990 | orchestrator |
2025-06-22 12:24:00.426000 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-06-22 12:24:00.426009 | orchestrator | Sunday 22 June 2025 12:18:43 +0000 (0:00:00.955) 0:03:18.849 ***********
2025-06-22 12:24:00.426071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.426087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.426099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.426122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.426134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.426144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.426154 | orchestrator |
2025-06-22 12:24:00.426164 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-06-22 12:24:00.426173 | orchestrator | Sunday 22 June 2025 12:18:45 +0000 (0:00:02.390) 0:03:21.239 ***********
2025-06-22 12:24:00.426183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.426197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.426246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-22 12:24:00.426257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.426265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.426280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.426289 | orchestrator |
2025-06-22 12:24:00.426297 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-06-22 12:24:00.426305 | orchestrator | Sunday 22
June 2025 12:18:50 +0000 (0:00:05.401) 0:03:26.641 *********** 2025-06-22 12:24:00.426319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 12:24:00.426332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 
12:24:00.426340 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.426349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 12:24:00.426362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 
12:24:00.426370 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.426385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 12:24:00.426413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 
12:24:00.426429 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.426443 | orchestrator | 2025-06-22 12:24:00.426457 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-22 12:24:00.426467 | orchestrator | Sunday 22 June 2025 12:18:51 +0000 (0:00:00.583) 0:03:27.225 *********** 2025-06-22 12:24:00.426475 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.426483 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:24:00.426491 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:24:00.426499 | orchestrator | 2025-06-22 12:24:00.426506 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-22 12:24:00.426514 | orchestrator | Sunday 22 June 2025 12:18:53 +0000 (0:00:01.972) 0:03:29.197 *********** 2025-06-22 12:24:00.426522 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.426530 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.426538 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.426545 | orchestrator | 2025-06-22 12:24:00.426553 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-22 12:24:00.426561 | orchestrator | Sunday 22 June 2025 12:18:53 +0000 (0:00:00.338) 0:03:29.536 *********** 2025-06-22 12:24:00.426570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 12:24:00.426588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 12:24:00.426610 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 12:24:00.426619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.426633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.426641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.426649 | orchestrator | 2025-06-22 12:24:00.426657 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 12:24:00.426665 | orchestrator | Sunday 22 June 2025 12:18:55 +0000 (0:00:01.851) 0:03:31.388 *********** 2025-06-22 12:24:00.426698 | orchestrator | 2025-06-22 12:24:00.426706 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 12:24:00.426714 | orchestrator | Sunday 22 June 2025 12:18:55 +0000 (0:00:00.130) 0:03:31.518 *********** 2025-06-22 12:24:00.426722 | orchestrator | 2025-06-22 12:24:00.426730 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 12:24:00.426737 | orchestrator | Sunday 22 June 2025 12:18:55 +0000 (0:00:00.124) 0:03:31.643 *********** 2025-06-22 12:24:00.426745 | orchestrator | 
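The container definitions dumped in the loop items above (container_name, image, volumes, healthcheck) follow the kolla-ansible service-definition shape. As an illustration of that structure only, here is a minimal sketch that renders one such dict into `docker run` arguments; the rendering logic and function name are assumptions for illustration, not kolla-ansible's actual implementation:

```python
def docker_args(svc):
    """Render a kolla-style container definition (as seen in the log
    items above) into docker run CLI arguments. Illustrative sketch;
    only the dict field names come from the log."""
    args = ["docker", "run", "-d", "--name", svc["container_name"]]
    if svc.get("privileged"):
        args.append("--privileged")
    # The logged volume lists contain empty-string placeholders for
    # unset optional mounts; skip those.
    for vol in svc.get("volumes", []):
        if vol:
            args += ["-v", vol]
    hc = svc.get("healthcheck")
    if hc:
        # kolla logs interval/timeout as bare seconds; docker wants units.
        args += ["--health-interval", hc["interval"] + "s",
                 "--health-retries", hc["retries"],
                 "--health-timeout", hc["timeout"] + "s",
                 "--health-cmd", hc["test"][-1]]
    args.append(svc["image"])
    return args
```

For example, feeding it the nova_api item from the log would yield a `docker run -d --name nova_api --privileged ...` command ending in the `registry.osism.tech/kolla/release/nova-api:30.0.1.20250530` image.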
2025-06-22 12:24:00.426789 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-22 12:24:00.426797 | orchestrator | Sunday 22 June 2025 12:18:56 +0000 (0:00:00.282) 0:03:31.925 *********** 2025-06-22 12:24:00.426805 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.426813 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:24:00.426820 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:24:00.426828 | orchestrator | 2025-06-22 12:24:00.426836 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-22 12:24:00.426844 | orchestrator | Sunday 22 June 2025 12:19:19 +0000 (0:00:23.764) 0:03:55.690 *********** 2025-06-22 12:24:00.426869 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.426877 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:24:00.426885 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:24:00.426893 | orchestrator | 2025-06-22 12:24:00.426901 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-22 12:24:00.426909 | orchestrator | 2025-06-22 12:24:00.426917 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 12:24:00.426925 | orchestrator | Sunday 22 June 2025 12:19:30 +0000 (0:00:10.874) 0:04:06.565 *********** 2025-06-22 12:24:00.426933 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:24:00.426942 | orchestrator | 2025-06-22 12:24:00.426955 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 12:24:00.426963 | orchestrator | Sunday 22 June 2025 12:19:31 +0000 (0:00:01.196) 0:04:07.761 *********** 2025-06-22 12:24:00.426971 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.426979 | orchestrator | 
skipping: [testbed-node-4] 2025-06-22 12:24:00.427049 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.427058 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.427075 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.427083 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.427139 | orchestrator | 2025-06-22 12:24:00.427149 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-22 12:24:00.427157 | orchestrator | Sunday 22 June 2025 12:19:32 +0000 (0:00:00.768) 0:04:08.530 *********** 2025-06-22 12:24:00.427165 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.427173 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.427181 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.427189 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:24:00.427197 | orchestrator | 2025-06-22 12:24:00.427204 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 12:24:00.427212 | orchestrator | Sunday 22 June 2025 12:19:33 +0000 (0:00:00.963) 0:04:09.493 *********** 2025-06-22 12:24:00.427220 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-22 12:24:00.427228 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-22 12:24:00.427236 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-22 12:24:00.427244 | orchestrator | 2025-06-22 12:24:00.427252 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 12:24:00.427260 | orchestrator | Sunday 22 June 2025 12:19:34 +0000 (0:00:00.699) 0:04:10.193 *********** 2025-06-22 12:24:00.427267 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-22 12:24:00.427275 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-22 12:24:00.427283 | orchestrator | 
changed: [testbed-node-5] => (item=br_netfilter) 2025-06-22 12:24:00.427291 | orchestrator | 2025-06-22 12:24:00.427298 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 12:24:00.427306 | orchestrator | Sunday 22 June 2025 12:19:35 +0000 (0:00:01.221) 0:04:11.414 *********** 2025-06-22 12:24:00.427314 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-22 12:24:00.427322 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.427330 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-22 12:24:00.427337 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.427345 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-22 12:24:00.427353 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.427361 | orchestrator | 2025-06-22 12:24:00.427369 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-22 12:24:00.427376 | orchestrator | Sunday 22 June 2025 12:19:36 +0000 (0:00:00.711) 0:04:12.125 *********** 2025-06-22 12:24:00.427384 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 12:24:00.427392 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 12:24:00.427400 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.427408 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 12:24:00.427415 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 12:24:00.427423 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.427431 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 12:24:00.427439 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 12:24:00.427447 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.427455 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 12:24:00.427463 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 12:24:00.427470 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 12:24:00.427478 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 12:24:00.427486 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 12:24:00.427499 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 12:24:00.427507 | orchestrator | 2025-06-22 12:24:00.427515 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-22 12:24:00.427522 | orchestrator | Sunday 22 June 2025 12:19:37 +0000 (0:00:01.092) 0:04:13.217 *********** 2025-06-22 12:24:00.427530 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.427538 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.427546 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.427554 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.427561 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.427569 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:24:00.427577 | orchestrator | 2025-06-22 12:24:00.427585 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-22 12:24:00.427593 | orchestrator | Sunday 22 June 2025 12:19:38 +0000 (0:00:01.322) 0:04:14.540 *********** 2025-06-22 12:24:00.427601 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.427608 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.427616 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.427624 | orchestrator | changed: 
[testbed-node-5] 2025-06-22 12:24:00.427632 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.427639 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.427647 | orchestrator | 2025-06-22 12:24:00.427655 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 12:24:00.427663 | orchestrator | Sunday 22 June 2025 12:19:40 +0000 (0:00:01.728) 0:04:16.269 *********** 2025-06-22 12:24:00.427701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427873 | orchestrator | 2025-06-22 12:24:00.427881 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 12:24:00.427889 | orchestrator | Sunday 22 June 2025 12:19:42 +0000 (0:00:02.465) 0:04:18.734 *********** 2025-06-22 12:24:00.427897 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:24:00.427905 | orchestrator | 
2025-06-22 12:24:00.427913 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 12:24:00.427921 | orchestrator | Sunday 22 June 2025 12:19:44 +0000 (0:00:01.256) 0:04:19.990 *********** 2025-06-22 12:24:00.427929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.427994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.428094 | orchestrator | 2025-06-22 12:24:00.428102 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 12:24:00.428110 | orchestrator | Sunday 22 June 2025 12:19:47 +0000 (0:00:03.474) 0:04:23.465 *********** 2025-06-22 12:24:00.428118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.428131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.428139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428148 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.428164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.428173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.428181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428193 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.428202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.428210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.428218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428227 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.428243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.428252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428265 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.428273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.428281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428289 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.428297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.428306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428314 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.428321 | orchestrator | 2025-06-22 12:24:00.428329 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 12:24:00.428337 | orchestrator | Sunday 22 June 2025 12:19:49 +0000 (0:00:02.085) 0:04:25.551 *********** 2025-06-22 12:24:00.428356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.428369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.428378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428386 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.428394 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.428402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.428419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428432 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.428441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.428449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 
12:24:00.428457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428466 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.428474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.428482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428490 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.428507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.428520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428528 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.428536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.428545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.428553 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.428561 | orchestrator | 2025-06-22 12:24:00.428569 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 12:24:00.428576 | orchestrator | Sunday 22 June 2025 12:19:51 +0000 (0:00:02.029) 0:04:27.580 *********** 2025-06-22 12:24:00.428584 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.428592 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.428600 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.428608 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 12:24:00.428616 | orchestrator | 2025-06-22 12:24:00.428623 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-22 12:24:00.428631 | orchestrator | Sunday 22 June 2025 12:19:52 +0000 (0:00:00.886) 0:04:28.466 *********** 2025-06-22 12:24:00.428639 | orchestrator | ok: 
[testbed-node-3 -> localhost] 2025-06-22 12:24:00.428647 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 12:24:00.428655 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 12:24:00.428662 | orchestrator | 2025-06-22 12:24:00.428688 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-22 12:24:00.428696 | orchestrator | Sunday 22 June 2025 12:19:53 +0000 (0:00:01.139) 0:04:29.605 *********** 2025-06-22 12:24:00.428704 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 12:24:00.428712 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 12:24:00.428720 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 12:24:00.428728 | orchestrator | 2025-06-22 12:24:00.428736 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-22 12:24:00.428749 | orchestrator | Sunday 22 June 2025 12:19:54 +0000 (0:00:00.891) 0:04:30.496 *********** 2025-06-22 12:24:00.428757 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:24:00.428765 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:24:00.428773 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:24:00.428781 | orchestrator | 2025-06-22 12:24:00.428789 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-22 12:24:00.428797 | orchestrator | Sunday 22 June 2025 12:19:55 +0000 (0:00:00.518) 0:04:31.015 *********** 2025-06-22 12:24:00.428805 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:24:00.428813 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:24:00.428821 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:24:00.428828 | orchestrator | 2025-06-22 12:24:00.428836 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-22 12:24:00.428844 | orchestrator | Sunday 22 June 2025 12:19:55 +0000 (0:00:00.520) 0:04:31.535 *********** 2025-06-22 12:24:00.428852 | orchestrator | 
changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 12:24:00.428865 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 12:24:00.428873 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 12:24:00.428881 | orchestrator | 2025-06-22 12:24:00.428892 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-22 12:24:00.428900 | orchestrator | Sunday 22 June 2025 12:19:57 +0000 (0:00:01.364) 0:04:32.899 *********** 2025-06-22 12:24:00.428908 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 12:24:00.428916 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 12:24:00.428924 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 12:24:00.428932 | orchestrator | 2025-06-22 12:24:00.428940 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-22 12:24:00.428947 | orchestrator | Sunday 22 June 2025 12:19:58 +0000 (0:00:01.154) 0:04:34.054 *********** 2025-06-22 12:24:00.428955 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 12:24:00.428963 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 12:24:00.428971 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 12:24:00.428978 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-22 12:24:00.428986 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-22 12:24:00.428994 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-22 12:24:00.429001 | orchestrator | 2025-06-22 12:24:00.429009 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-22 12:24:00.429017 | orchestrator | Sunday 22 June 2025 12:20:02 +0000 (0:00:03.739) 0:04:37.794 *********** 2025-06-22 12:24:00.429025 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 12:24:00.429033 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.429040 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.429048 | orchestrator | 2025-06-22 12:24:00.429056 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-22 12:24:00.429064 | orchestrator | Sunday 22 June 2025 12:20:02 +0000 (0:00:00.303) 0:04:38.097 *********** 2025-06-22 12:24:00.429072 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.429079 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.429087 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.429095 | orchestrator | 2025-06-22 12:24:00.429103 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-22 12:24:00.429110 | orchestrator | Sunday 22 June 2025 12:20:02 +0000 (0:00:00.478) 0:04:38.575 *********** 2025-06-22 12:24:00.429118 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.429126 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.429134 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:24:00.429142 | orchestrator | 2025-06-22 12:24:00.429149 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-22 12:24:00.429165 | orchestrator | Sunday 22 June 2025 12:20:04 +0000 (0:00:01.206) 0:04:39.782 *********** 2025-06-22 12:24:00.429173 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 12:24:00.429181 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 12:24:00.429189 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 12:24:00.429197 | 
orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 12:24:00.429205 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 12:24:00.429213 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 12:24:00.429221 | orchestrator | 2025-06-22 12:24:00.429229 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-22 12:24:00.429236 | orchestrator | Sunday 22 June 2025 12:20:07 +0000 (0:00:03.388) 0:04:43.171 *********** 2025-06-22 12:24:00.429244 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 12:24:00.429252 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 12:24:00.429260 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 12:24:00.429268 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 12:24:00.429275 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.429283 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 12:24:00.429291 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.429299 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 12:24:00.429306 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:24:00.429314 | orchestrator | 2025-06-22 12:24:00.429322 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-22 12:24:00.429330 | orchestrator | Sunday 22 June 2025 12:20:10 +0000 (0:00:03.433) 0:04:46.604 *********** 2025-06-22 12:24:00.429337 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.429345 | orchestrator | 2025-06-22 12:24:00.429353 | orchestrator | TASK [nova-cell : Set nova policy file] 
**************************************** 2025-06-22 12:24:00.429361 | orchestrator | Sunday 22 June 2025 12:20:10 +0000 (0:00:00.128) 0:04:46.732 *********** 2025-06-22 12:24:00.429369 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.429376 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.429384 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.429392 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.429400 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.429407 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.429415 | orchestrator | 2025-06-22 12:24:00.429423 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-22 12:24:00.429436 | orchestrator | Sunday 22 June 2025 12:20:11 +0000 (0:00:00.766) 0:04:47.499 *********** 2025-06-22 12:24:00.429444 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 12:24:00.429452 | orchestrator | 2025-06-22 12:24:00.429463 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-22 12:24:00.429471 | orchestrator | Sunday 22 June 2025 12:20:12 +0000 (0:00:00.762) 0:04:48.261 *********** 2025-06-22 12:24:00.429479 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.429487 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.429495 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.429502 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.429510 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.429518 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.429525 | orchestrator | 2025-06-22 12:24:00.429538 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-22 12:24:00.429546 | orchestrator | Sunday 22 June 2025 12:20:13 +0000 (0:00:00.576) 0:04:48.837 *********** 2025-06-22 12:24:00.429554 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429635 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.429787 | orchestrator | 2025-06-22 12:24:00.429802 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-22 12:24:00.429810 | orchestrator | Sunday 22 June 2025 12:20:16 +0000 (0:00:03.755) 0:04:52.593 *********** 2025-06-22 12:24:00.429819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.430120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.430134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.430141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.430148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.430156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2025-06-22 12:24:00.430168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:00.430193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2025-06-22 12:24:00.430222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430255 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 12:24:00.430262 | orchestrator | 2025-06-22 12:24:00.430269 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-22 12:24:00.430276 | orchestrator | Sunday 22 June 2025 12:20:22 +0000 (0:00:06.006) 0:04:58.600 *********** 2025-06-22 12:24:00.430283 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.430290 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.430296 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.430303 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.430309 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.430316 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.430322 | orchestrator | 2025-06-22 12:24:00.430329 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-22 12:24:00.430336 | orchestrator | Sunday 22 June 2025 12:20:24 +0000 (0:00:01.614) 0:05:00.214 *********** 2025-06-22 12:24:00.430342 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 12:24:00.430349 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 12:24:00.430356 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 12:24:00.430362 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 12:24:00.430369 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 12:24:00.430376 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 12:24:00.430382 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 12:24:00.430389 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.430395 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 12:24:00.430402 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.430409 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 12:24:00.430419 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.430426 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 12:24:00.430433 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 12:24:00.430440 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 12:24:00.430446 | orchestrator | 2025-06-22 12:24:00.430453 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-22 12:24:00.430460 | orchestrator | Sunday 22 June 2025 12:20:28 +0000 (0:00:03.655) 0:05:03.869 *********** 2025-06-22 12:24:00.430467 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.430473 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.430480 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.430487 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.430493 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.430500 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.430507 | orchestrator | 2025-06-22 12:24:00.430513 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-22 12:24:00.430520 | orchestrator | Sunday 22 June 2025 12:20:28 +0000 (0:00:00.781) 0:05:04.651 *********** 2025-06-22 12:24:00.430527 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 12:24:00.430537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 12:24:00.430544 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 12:24:00.430554 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 12:24:00.430561 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 12:24:00.430568 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 12:24:00.430575 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 12:24:00.430581 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 12:24:00.430588 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 12:24:00.430595 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 12:24:00.430602 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.430609 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 12:24:00.430615 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.430622 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 12:24:00.430629 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.430636 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 12:24:00.430643 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 12:24:00.430649 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 12:24:00.430656 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 12:24:00.430663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 12:24:00.430695 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 12:24:00.430702 | orchestrator | 2025-06-22 12:24:00.430709 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-22 12:24:00.430716 | orchestrator | Sunday 22 June 2025 12:20:34 +0000 (0:00:05.528) 0:05:10.179 *********** 2025-06-22 12:24:00.430723 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 12:24:00.430730 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 12:24:00.430736 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 
12:24:00.430744 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 12:24:00.430752 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 12:24:00.430759 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 12:24:00.430766 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 12:24:00.430774 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 12:24:00.430781 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 12:24:00.430788 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 12:24:00.430795 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 12:24:00.430803 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 12:24:00.430811 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 12:24:00.430818 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.430825 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 12:24:00.430833 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 12:24:00.430840 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.430848 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 12:24:00.430855 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.430862 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 12:24:00.430870 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 12:24:00.430877 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 12:24:00.430888 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 12:24:00.430896 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 12:24:00.430907 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 12:24:00.430915 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 12:24:00.430922 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 12:24:00.430930 | orchestrator | 2025-06-22 12:24:00.430937 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-22 12:24:00.430945 | orchestrator | Sunday 22 June 2025 12:20:41 +0000 (0:00:06.882) 0:05:17.061 *********** 2025-06-22 12:24:00.430953 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.430960 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.430968 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.430976 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.430988 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.430996 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.431003 | orchestrator | 2025-06-22 12:24:00.431011 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-22 12:24:00.431018 | orchestrator | Sunday 22 June 2025 12:20:41 +0000 (0:00:00.575) 0:05:17.637 *********** 2025-06-22 12:24:00.431026 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.431034 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.431041 | orchestrator | 
skipping: [testbed-node-5] 2025-06-22 12:24:00.431049 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.431056 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.431064 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.431071 | orchestrator | 2025-06-22 12:24:00.431079 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-22 12:24:00.431087 | orchestrator | Sunday 22 June 2025 12:20:42 +0000 (0:00:00.831) 0:05:18.468 *********** 2025-06-22 12:24:00.431095 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.431102 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.431109 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.431115 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:24:00.431122 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:24:00.431129 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:24:00.431135 | orchestrator | 2025-06-22 12:24:00.431142 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-22 12:24:00.431149 | orchestrator | Sunday 22 June 2025 12:20:44 +0000 (0:00:02.023) 0:05:20.492 *********** 2025-06-22 12:24:00.431156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.431163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.431171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.431181 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.431196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.431204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.431211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.431218 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.431225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 12:24:00.431233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 12:24:00.431247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.431259 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.431266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.431273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.431280 | orchestrator | skipping: [testbed-node-0] 
2025-06-22 12:24:00.431287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 12:24:00.431294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.431301 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.431308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2025-06-22 12:24:00.431327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 12:24:00.431334 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.431341 | orchestrator | 2025-06-22 12:24:00.431348 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-22 12:24:00.431355 | orchestrator | Sunday 22 June 2025 12:20:46 +0000 (0:00:01.699) 0:05:22.191 *********** 2025-06-22 12:24:00.431361 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 12:24:00.431368 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 12:24:00.431375 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.431381 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 12:24:00.431388 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 12:24:00.431395 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.431401 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 12:24:00.431408 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 12:24:00.431415 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.431421 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 12:24:00.431428 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  
2025-06-22 12:24:00.431435 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.431441 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 12:24:00.431448 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 12:24:00.431455 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.431461 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 12:24:00.431468 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 12:24:00.431475 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.431481 | orchestrator | 2025-06-22 12:24:00.431488 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-22 12:24:00.431495 | orchestrator | Sunday 22 June 2025 12:20:47 +0000 (0:00:00.657) 0:05:22.849 *********** 2025-06-22 12:24:00.431502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 12:24:00.431509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-22 12:24:00.431524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-22 12:24:00.431535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-22 12:24:00.431542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-22 12:24:00.431549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-22 12:24:00.431557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-22 12:24:00.431564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-22 12:24:00.431575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-22 12:24:00.431589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.431596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.431604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.431611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.431622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.431629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-22 12:24:00.431636 | orchestrator |
2025-06-22 12:24:00.431643 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-22 12:24:00.431650 | orchestrator | Sunday 22 June 2025 12:20:50 +0000 (0:00:02.951) 0:05:25.800 ***********
2025-06-22 12:24:00.431659 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:24:00.431681 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:24:00.431690 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:24:00.431696 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.431703 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:24:00.431713 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:24:00.431720 | orchestrator |
2025-06-22 12:24:00.431727 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-22 12:24:00.431734 | orchestrator | Sunday 22 June 2025 12:20:50 +0000 (0:00:00.572) 0:05:26.372 ***********
2025-06-22 12:24:00.431740 | orchestrator |
2025-06-22 12:24:00.431747 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-22 12:24:00.431754 | orchestrator | Sunday 22 June 2025 12:20:50 +0000 (0:00:00.348) 0:05:26.720 ***********
2025-06-22 12:24:00.431760 | orchestrator |
2025-06-22 12:24:00.431767 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-22 12:24:00.431774 | orchestrator | Sunday 22 June 2025 12:20:51 +0000 (0:00:00.130) 0:05:26.851 ***********
2025-06-22 12:24:00.431780 | orchestrator |
2025-06-22 12:24:00.431787 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-22 12:24:00.431794 | orchestrator | Sunday 22 June 2025 12:20:51 +0000 (0:00:00.128) 0:05:26.979 ***********
2025-06-22 12:24:00.431800 | orchestrator |
2025-06-22 12:24:00.431807 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-22 12:24:00.431813 | orchestrator | Sunday 22 June 2025 12:20:51 +0000 (0:00:00.130) 0:05:27.110 ***********
2025-06-22 12:24:00.431820 | orchestrator |
2025-06-22 12:24:00.431827 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-22 12:24:00.431833 | orchestrator | Sunday 22 June 2025 12:20:51 +0000 (0:00:00.130) 0:05:27.241 ***********
2025-06-22 12:24:00.431840 | orchestrator |
2025-06-22 12:24:00.431847 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-06-22 12:24:00.431853 | orchestrator | Sunday 22 June 2025 12:20:51 +0000 (0:00:00.133) 0:05:27.374 ***********
2025-06-22 12:24:00.431860 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:24:00.431867 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:24:00.431874 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:24:00.431880 | orchestrator |
2025-06-22 12:24:00.431896 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-06-22 12:24:00.431903 | orchestrator | Sunday 22 June 2025 12:21:03 +0000 (0:00:12.205) 0:05:39.580 ***********
2025-06-22 12:24:00.431910 | orchestrator | changed: [testbed-node-0]
2025-06-22 12:24:00.431916 | orchestrator | changed: [testbed-node-2]
2025-06-22 12:24:00.431923 | orchestrator | changed: [testbed-node-1]
2025-06-22 12:24:00.431930 | orchestrator |
2025-06-22 12:24:00.431936 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-06-22 12:24:00.431943 | orchestrator | Sunday 22 June 2025 12:21:20 +0000 (0:00:17.196) 0:05:56.777 ***********
2025-06-22 12:24:00.431950 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:24:00.431956 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:24:00.431963 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:24:00.431970 | orchestrator |
2025-06-22 12:24:00.431976 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-06-22 12:24:00.431983 | orchestrator | Sunday 22 June 2025 12:21:42 +0000 (0:00:21.328) 0:06:18.105 ***********
2025-06-22 12:24:00.431990 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:24:00.431996 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:24:00.432003 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:24:00.432009 | orchestrator |
2025-06-22 12:24:00.432016 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-06-22 12:24:00.432023 | orchestrator | Sunday 22 June 2025 12:22:23 +0000 (0:00:41.485) 0:06:59.591 ***********
2025-06-22 12:24:00.432029 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:24:00.432036 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:24:00.432043 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:24:00.432050 | orchestrator |
2025-06-22 12:24:00.432056 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-06-22 12:24:00.432063 | orchestrator | Sunday 22 June 2025 12:22:24 +0000 (0:00:01.026) 0:07:00.617 ***********
2025-06-22 12:24:00.432070 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:24:00.432076 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:24:00.432083 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:24:00.432090 | orchestrator |
2025-06-22 12:24:00.432096 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-06-22 12:24:00.432103 | orchestrator | Sunday 22 June 2025 12:22:25 +0000 (0:00:00.765) 0:07:01.383 ***********
2025-06-22 12:24:00.432109 | orchestrator | changed: [testbed-node-5]
2025-06-22 12:24:00.432116 | orchestrator | changed: [testbed-node-4]
2025-06-22 12:24:00.432123 | orchestrator | changed: [testbed-node-3]
2025-06-22 12:24:00.432129 | orchestrator |
2025-06-22 12:24:00.432136 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-06-22 12:24:00.432143 | orchestrator | Sunday 22 June 2025 12:22:52 +0000 (0:00:26.754) 0:07:28.138 ***********
2025-06-22 12:24:00.432149 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:24:00.432156 | orchestrator |
2025-06-22 12:24:00.432163 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-06-22 12:24:00.432169 | orchestrator | Sunday 22 June 2025 12:22:52 +0000 (0:00:00.136) 0:07:28.275 ***********
2025-06-22 12:24:00.432176 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:24:00.432183 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:24:00.432189 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:24:00.432196 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:24:00.432203 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:24:00.432209 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-06-22 12:24:00.432216 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:24:00.432223 | orchestrator | 2025-06-22 12:24:00.432230 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-22 12:24:00.432236 | orchestrator | Sunday 22 June 2025 12:23:15 +0000 (0:00:22.804) 0:07:51.079 *********** 2025-06-22 12:24:00.432248 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.432258 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.432265 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.432272 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.432279 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.432285 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.432292 | orchestrator | 2025-06-22 12:24:00.432302 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-22 12:24:00.432309 | orchestrator | Sunday 22 June 2025 12:23:22 +0000 (0:00:07.667) 0:07:58.747 *********** 2025-06-22 12:24:00.432316 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.432322 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.432329 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.432335 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.432342 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.432349 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-06-22 12:24:00.432356 | orchestrator | 2025-06-22 12:24:00.432362 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 12:24:00.432369 | orchestrator | Sunday 22 June 2025 12:23:26 +0000 (0:00:03.360) 0:08:02.107 *********** 2025-06-22 12:24:00.432375 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:24:00.432382 | 
orchestrator | 2025-06-22 12:24:00.432388 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 12:24:00.432395 | orchestrator | Sunday 22 June 2025 12:23:39 +0000 (0:00:13.281) 0:08:15.389 *********** 2025-06-22 12:24:00.432402 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:24:00.432408 | orchestrator | 2025-06-22 12:24:00.432415 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-22 12:24:00.432421 | orchestrator | Sunday 22 June 2025 12:23:40 +0000 (0:00:01.323) 0:08:16.712 *********** 2025-06-22 12:24:00.432428 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.432435 | orchestrator | 2025-06-22 12:24:00.432441 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-22 12:24:00.432448 | orchestrator | Sunday 22 June 2025 12:23:42 +0000 (0:00:01.280) 0:08:17.993 *********** 2025-06-22 12:24:00.432454 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:24:00.432461 | orchestrator | 2025-06-22 12:24:00.432467 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-22 12:24:00.432474 | orchestrator | Sunday 22 June 2025 12:23:53 +0000 (0:00:10.913) 0:08:28.906 *********** 2025-06-22 12:24:00.432481 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:24:00.432487 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:24:00.432494 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:24:00.432501 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:24:00.432507 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:24:00.432514 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:24:00.432520 | orchestrator | 2025-06-22 12:24:00.432527 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-22 12:24:00.432534 | orchestrator | 2025-06-22 
12:24:00.432540 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-22 12:24:00.432547 | orchestrator | Sunday 22 June 2025 12:23:54 +0000 (0:00:01.725) 0:08:30.631 *********** 2025-06-22 12:24:00.432554 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:24:00.432560 | orchestrator | changed: [testbed-node-1] 2025-06-22 12:24:00.432567 | orchestrator | changed: [testbed-node-2] 2025-06-22 12:24:00.432574 | orchestrator | 2025-06-22 12:24:00.432580 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-22 12:24:00.432587 | orchestrator | 2025-06-22 12:24:00.432593 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-22 12:24:00.432600 | orchestrator | Sunday 22 June 2025 12:23:55 +0000 (0:00:01.089) 0:08:31.721 *********** 2025-06-22 12:24:00.432606 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.432617 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.432624 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.432630 | orchestrator | 2025-06-22 12:24:00.432637 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-22 12:24:00.432644 | orchestrator | 2025-06-22 12:24:00.432650 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-22 12:24:00.432657 | orchestrator | Sunday 22 June 2025 12:23:56 +0000 (0:00:00.500) 0:08:32.222 *********** 2025-06-22 12:24:00.432663 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-22 12:24:00.432685 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 12:24:00.432692 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 12:24:00.432699 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-22 12:24:00.432705 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-22 12:24:00.432712 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-22 12:24:00.432719 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:24:00.432725 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-22 12:24:00.432732 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 12:24:00.432738 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 12:24:00.432745 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-22 12:24:00.432751 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-22 12:24:00.432758 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-22 12:24:00.432765 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:24:00.432771 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-22 12:24:00.432778 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 12:24:00.432784 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 12:24:00.432791 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-22 12:24:00.432797 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-22 12:24:00.432804 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-22 12:24:00.432814 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:24:00.432821 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-22 12:24:00.432828 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 12:24:00.432838 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 12:24:00.432845 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-22 12:24:00.432852 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-22 12:24:00.432858 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-22 12:24:00.432865 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-22 12:24:00.432871 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 12:24:00.432878 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 12:24:00.432885 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-22 12:24:00.432891 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-22 12:24:00.432898 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-22 12:24:00.432905 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.432911 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.432918 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-22 12:24:00.432924 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 12:24:00.432931 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 12:24:00.432938 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-22 12:24:00.432949 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-22 12:24:00.432955 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-22 12:24:00.432962 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.432969 | orchestrator | 2025-06-22 12:24:00.432975 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-22 12:24:00.432982 | orchestrator | 2025-06-22 12:24:00.432989 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-22 12:24:00.432995 | orchestrator | Sunday 22 June 2025 12:23:57 +0000 (0:00:01.268) 
0:08:33.490 *********** 2025-06-22 12:24:00.433002 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-22 12:24:00.433008 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-22 12:24:00.433015 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.433022 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-22 12:24:00.433028 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-22 12:24:00.433035 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.433041 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-22 12:24:00.433048 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-22 12:24:00.433055 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:24:00.433061 | orchestrator | 2025-06-22 12:24:00.433068 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-22 12:24:00.433075 | orchestrator | 2025-06-22 12:24:00.433081 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-22 12:24:00.433091 | orchestrator | Sunday 22 June 2025 12:23:58 +0000 (0:00:00.744) 0:08:34.235 *********** 2025-06-22 12:24:00.433098 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.433105 | orchestrator | 2025-06-22 12:24:00.433111 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-22 12:24:00.433118 | orchestrator | 2025-06-22 12:24:00.433125 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-22 12:24:00.433131 | orchestrator | Sunday 22 June 2025 12:23:59 +0000 (0:00:00.642) 0:08:34.877 *********** 2025-06-22 12:24:00.433138 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:24:00.433145 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:24:00.433151 | orchestrator | skipping: [testbed-node-2] 
2025-06-22 12:24:00.433158 | orchestrator | 2025-06-22 12:24:00.433165 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:24:00.433171 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:24:00.433178 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-22 12:24:00.433185 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 12:24:00.433192 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 12:24:00.433199 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 12:24:00.433206 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-06-22 12:24:00.433212 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-22 12:24:00.433219 | orchestrator | 2025-06-22 12:24:00.433226 | orchestrator | 2025-06-22 12:24:00.433232 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:24:00.433239 | orchestrator | Sunday 22 June 2025 12:23:59 +0000 (0:00:00.433) 0:08:35.311 *********** 2025-06-22 12:24:00.433253 | orchestrator | =============================================================================== 2025-06-22 12:24:00.433260 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.49s 2025-06-22 12:24:00.433270 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.34s 2025-06-22 12:24:00.433277 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.75s 2025-06-22 12:24:00.433284 | orchestrator | nova : Restart 
nova-scheduler container -------------------------------- 23.76s 2025-06-22 12:24:00.433291 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.80s 2025-06-22 12:24:00.433298 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.33s 2025-06-22 12:24:00.433304 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.54s 2025-06-22 12:24:00.433311 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.55s 2025-06-22 12:24:00.433318 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.20s 2025-06-22 12:24:00.433324 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.55s 2025-06-22 12:24:00.433331 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.28s 2025-06-22 12:24:00.433338 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.42s 2025-06-22 12:24:00.433344 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.21s 2025-06-22 12:24:00.433351 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.05s 2025-06-22 12:24:00.433358 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.94s 2025-06-22 12:24:00.433364 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.91s 2025-06-22 12:24:00.433371 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.87s 2025-06-22 12:24:00.433378 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.89s 2025-06-22 12:24:00.433384 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.78s 2025-06-22 12:24:00.433391 | orchestrator | nova-cell : Fail if 
nova-compute service failed to register ------------- 7.67s 2025-06-22 12:24:03.460303 | orchestrator | 2025-06-22 12:24:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:06.503102 | orchestrator | 2025-06-22 12:24:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:09.544039 | orchestrator | 2025-06-22 12:24:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:12.584820 | orchestrator | 2025-06-22 12:24:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:15.627927 | orchestrator | 2025-06-22 12:24:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:18.670953 | orchestrator | 2025-06-22 12:24:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:21.718650 | orchestrator | 2025-06-22 12:24:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:24.756573 | orchestrator | 2025-06-22 12:24:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:27.800613 | orchestrator | 2025-06-22 12:24:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:30.841437 | orchestrator | 2025-06-22 12:24:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:33.883712 | orchestrator | 2025-06-22 12:24:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:36.922587 | orchestrator | 2025-06-22 12:24:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:39.967828 | orchestrator | 2025-06-22 12:24:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:43.015226 | orchestrator | 2025-06-22 12:24:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:46.062935 | orchestrator | 2025-06-22 12:24:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:49.106284 | orchestrator | 2025-06-22 12:24:49 | INFO  | Wait 1 second(s) until refresh of 
running tasks 2025-06-22 12:24:52.153032 | orchestrator | 2025-06-22 12:24:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:55.196201 | orchestrator | 2025-06-22 12:24:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:24:58.237485 | orchestrator | 2025-06-22 12:24:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 12:25:01.283438 | orchestrator | 2025-06-22 12:25:01.582120 | orchestrator | 2025-06-22 12:25:01.587070 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jun 22 12:25:01 UTC 2025 2025-06-22 12:25:01.587118 | orchestrator | 2025-06-22 12:25:01.949903 | orchestrator | ok: Runtime: 0:36:20.746073 2025-06-22 12:25:02.220964 | 2025-06-22 12:25:02.221142 | TASK [Bootstrap services] 2025-06-22 12:25:02.990849 | orchestrator | 2025-06-22 12:25:02.991031 | orchestrator | # BOOTSTRAP 2025-06-22 12:25:02.991055 | orchestrator | 2025-06-22 12:25:02.991071 | orchestrator | + set -e 2025-06-22 12:25:02.991084 | orchestrator | + echo 2025-06-22 12:25:02.991098 | orchestrator | + echo '# BOOTSTRAP' 2025-06-22 12:25:02.991117 | orchestrator | + echo 2025-06-22 12:25:02.991164 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-22 12:25:02.999931 | orchestrator | + set -e 2025-06-22 12:25:02.999974 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-22 12:25:06.944704 | orchestrator | 2025-06-22 12:25:06 | INFO  | It takes a moment until task a6f4ee2c-ac6e-4e0c-8e22-72b4cb318d92 (flavor-manager) has been started and output is visible here. 
2025-06-22 12:25:10.362097 | orchestrator | 2025-06-22 12:25:10 | INFO  | Flavor SCS-1V-4 created
2025-06-22 12:25:10.670311 | orchestrator | 2025-06-22 12:25:10 | INFO  | Flavor SCS-2V-8 created
2025-06-22 12:25:11.066358 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-4V-16 created
2025-06-22 12:25:11.223979 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-8V-32 created
2025-06-22 12:25:11.369187 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-1V-2 created
2025-06-22 12:25:11.525263 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-2V-4 created
2025-06-22 12:25:11.653695 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-4V-8 created
2025-06-22 12:25:11.812856 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-8V-16 created
2025-06-22 12:25:11.958184 | orchestrator | 2025-06-22 12:25:11 | INFO  | Flavor SCS-16V-32 created
2025-06-22 12:25:12.101897 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-1V-8 created
2025-06-22 12:25:12.209264 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-2V-16 created
2025-06-22 12:25:12.334452 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-4V-32 created
2025-06-22 12:25:12.467379 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-1L-1 created
2025-06-22 12:25:12.587800 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-2V-4-20s created
2025-06-22 12:25:12.740588 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-4V-16-100s created
2025-06-22 12:25:12.895490 | orchestrator | 2025-06-22 12:25:12 | INFO  | Flavor SCS-1V-4-10 created
2025-06-22 12:25:13.025184 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-2V-8-20 created
2025-06-22 12:25:13.175389 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-4V-16-50 created
2025-06-22 12:25:13.343573 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-8V-32-100 created
2025-06-22 12:25:13.486997 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-1V-2-5 created
2025-06-22 12:25:13.631523 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-2V-4-10 created
2025-06-22 12:25:13.773706 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-4V-8-20 created
2025-06-22 12:25:13.899972 | orchestrator | 2025-06-22 12:25:13 | INFO  | Flavor SCS-8V-16-50 created
2025-06-22 12:25:14.051803 | orchestrator | 2025-06-22 12:25:14 | INFO  | Flavor SCS-16V-32-100 created
2025-06-22 12:25:14.193860 | orchestrator | 2025-06-22 12:25:14 | INFO  | Flavor SCS-1V-8-20 created
2025-06-22 12:25:14.340096 | orchestrator | 2025-06-22 12:25:14 | INFO  | Flavor SCS-2V-16-50 created
2025-06-22 12:25:14.494712 | orchestrator | 2025-06-22 12:25:14 | INFO  | Flavor SCS-4V-32-100 created
2025-06-22 12:25:14.625335 | orchestrator | 2025-06-22 12:25:14 | INFO  | Flavor SCS-1L-1-5 created
2025-06-22 12:25:16.893516 | orchestrator | 2025-06-22 12:25:16 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-06-22 12:25:16.898254 | orchestrator | Registering Redlock._acquired_script
2025-06-22 12:25:16.898312 | orchestrator | Registering Redlock._extend_script
2025-06-22 12:25:16.898356 | orchestrator | Registering Redlock._release_script
2025-06-22 12:25:16.955698 | orchestrator | 2025-06-22 12:25:16 | INFO  | Task 5a69bad9-0f6a-4e1b-9899-dfdc6f40d559 (bootstrap-basic) was prepared for execution.
2025-06-22 12:25:16.955775 | orchestrator | 2025-06-22 12:25:16 | INFO  | It takes a moment until task 5a69bad9-0f6a-4e1b-9899-dfdc6f40d559 (bootstrap-basic) has been started and output is visible here.
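The flavor names created above follow the SCS flavor-naming scheme, roughly `SCS-<vCPUs><V|L>-<RAM GiB>[-<root disk GiB>[s]]` (e.g. `SCS-2V-8-20` is 2 vCPUs, 8 GiB RAM, 20 GiB root disk; a trailing `s` marks an SSD/local disk variant). A rough parser for that common form, as an illustration only; the authoritative grammar is the SCS flavor-naming standard:

```shell
# Parse the common form of an SCS flavor name into its resource components.
# Illustration only -- does not cover every suffix the SCS spec allows.
parse_scs_flavor() {
    IFS=- read -r _prefix cpu ram disk <<EOF
$1
EOF
    disk=${disk%s}   # drop the SSD marker, e.g. SCS-2V-4-20s
    echo "vcpus=${cpu%[VL]} ram_gib=${ram} disk_gib=${disk:-0}"
}
```

Flavors without a disk component (e.g. `SCS-1V-4`) are diskless, i.e. intended for boot-from-volume.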
2025-06-22 12:25:21.048224 | orchestrator |
2025-06-22 12:25:21.049386 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-06-22 12:25:21.051183 | orchestrator |
2025-06-22 12:25:21.052605 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-22 12:25:21.054566 | orchestrator | Sunday 22 June 2025 12:25:21 +0000 (0:00:00.077) 0:00:00.077 ***********
2025-06-22 12:25:22.903770 | orchestrator | ok: [localhost]
2025-06-22 12:25:22.904305 | orchestrator |
2025-06-22 12:25:22.905957 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-06-22 12:25:22.906002 | orchestrator | Sunday 22 June 2025 12:25:22 +0000 (0:00:01.858) 0:00:01.936 ***********
2025-06-22 12:25:30.744156 | orchestrator | ok: [localhost]
2025-06-22 12:25:30.744521 | orchestrator |
2025-06-22 12:25:30.745195 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-06-22 12:25:30.745857 | orchestrator | Sunday 22 June 2025 12:25:30 +0000 (0:00:07.840) 0:00:09.776 ***********
2025-06-22 12:25:38.136987 | orchestrator | changed: [localhost]
2025-06-22 12:25:38.137743 | orchestrator |
2025-06-22 12:25:38.138196 | orchestrator | TASK [Get volume type local] ***************************************************
2025-06-22 12:25:38.138584 | orchestrator | Sunday 22 June 2025 12:25:38 +0000 (0:00:07.392) 0:00:17.168 ***********
2025-06-22 12:25:45.242270 | orchestrator | ok: [localhost]
2025-06-22 12:25:45.243197 | orchestrator |
2025-06-22 12:25:45.244271 | orchestrator | TASK [Create volume type local] ************************************************
2025-06-22 12:25:45.245909 | orchestrator | Sunday 22 June 2025 12:25:45 +0000 (0:00:07.103) 0:00:24.272 ***********
2025-06-22 12:25:51.844434 | orchestrator | changed: [localhost]
2025-06-22 12:25:51.844680 | orchestrator |
2025-06-22 12:25:51.845980 | orchestrator | TASK [Create public network] ***************************************************
2025-06-22 12:25:51.850948 | orchestrator | Sunday 22 June 2025 12:25:51 +0000 (0:00:06.601) 0:00:30.873 ***********
2025-06-22 12:25:59.121744 | orchestrator | changed: [localhost]
2025-06-22 12:25:59.122199 | orchestrator |
2025-06-22 12:25:59.125178 | orchestrator | TASK [Set public network to default] *******************************************
2025-06-22 12:25:59.125204 | orchestrator | Sunday 22 June 2025 12:25:59 +0000 (0:00:07.278) 0:00:38.152 ***********
2025-06-22 12:26:05.422263 | orchestrator | changed: [localhost]
2025-06-22 12:26:05.422513 | orchestrator |
2025-06-22 12:26:05.424650 | orchestrator | TASK [Create public subnet] ****************************************************
2025-06-22 12:26:05.424700 | orchestrator | Sunday 22 June 2025 12:26:05 +0000 (0:00:06.298) 0:00:44.451 ***********
2025-06-22 12:26:10.017472 | orchestrator | changed: [localhost]
2025-06-22 12:26:10.018113 | orchestrator |
2025-06-22 12:26:10.018925 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-06-22 12:26:10.019638 | orchestrator | Sunday 22 June 2025 12:26:10 +0000 (0:00:04.596) 0:00:49.047 ***********
2025-06-22 12:26:14.634510 | orchestrator | changed: [localhost]
2025-06-22 12:26:14.636843 | orchestrator |
2025-06-22 12:26:14.638793 | orchestrator | TASK [Create manager role] *****************************************************
2025-06-22 12:26:14.639463 | orchestrator | Sunday 22 June 2025 12:26:14 +0000 (0:00:04.616) 0:00:53.664 ***********
2025-06-22 12:26:18.173918 | orchestrator | ok: [localhost]
2025-06-22 12:26:18.174078 | orchestrator |
2025-06-22 12:26:18.174096 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:26:18.174359 | orchestrator | 2025-06-22 12:26:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
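In the play above, each resource is handled as a "Get ..." lookup followed by a "Create ..." task, the check-then-create idempotency pattern that makes reruns report `ok` instead of `changed`. Sketched generically below with the `openstack` CLI; this is an assumed equivalent for illustration, not the actual playbook, which uses Ansible modules:

```shell
# Check-then-create: only create the resource when the lookup fails,
# mirroring the paired "Get volume type X" / "Create volume type X" tasks.
ensure_volume_type() {
    if openstack volume type show "$1" >/dev/null 2>&1; then
        echo "ok: $1"        # already present -> no change
    else
        openstack volume type create "$1" >/dev/null
        echo "changed: $1"   # created -> reported as changed
    fi
}
```

On a first run both volume types come back `changed`, matching the `changed=6` in the recap; a second run of the same play would report them `ok`.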
2025-06-22 12:26:18.174696 | orchestrator | 2025-06-22 12:26:18 | INFO  | Please wait and do not abort execution.
2025-06-22 12:26:18.176526 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 12:26:18.177900 | orchestrator |
2025-06-22 12:26:18.178012 | orchestrator |
2025-06-22 12:26:18.179061 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:26:18.179768 | orchestrator | Sunday 22 June 2025 12:26:18 +0000 (0:00:03.539) 0:00:57.204 ***********
2025-06-22 12:26:18.180470 | orchestrator | ===============================================================================
2025-06-22 12:26:18.181240 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.84s
2025-06-22 12:26:18.181694 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.39s
2025-06-22 12:26:18.182342 | orchestrator | Create public network --------------------------------------------------- 7.28s
2025-06-22 12:26:18.182891 | orchestrator | Get volume type local --------------------------------------------------- 7.10s
2025-06-22 12:26:18.183163 | orchestrator | Create volume type local ------------------------------------------------ 6.60s
2025-06-22 12:26:18.183755 | orchestrator | Set public network to default ------------------------------------------- 6.30s
2025-06-22 12:26:18.184227 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.62s
2025-06-22 12:26:18.184524 | orchestrator | Create public subnet ---------------------------------------------------- 4.60s
2025-06-22 12:26:18.185096 | orchestrator | Create manager role ----------------------------------------------------- 3.54s
2025-06-22 12:26:18.185402 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s
2025-06-22 12:26:20.619031 | orchestrator | 2025-06-22 12:26:20 | INFO  | It takes a moment until task 963a70a2-f4ca-4b99-9ed6-74cd06007817 (image-manager) has been started and output is visible here.
2025-06-22 12:26:24.170126 | orchestrator | 2025-06-22 12:26:24 | INFO  | Processing image 'Cirros 0.6.2'
2025-06-22 12:26:24.378804 | orchestrator | 2025-06-22 12:26:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-06-22 12:26:24.379814 | orchestrator | 2025-06-22 12:26:24 | INFO  | Importing image Cirros 0.6.2
2025-06-22 12:26:24.382497 | orchestrator | 2025-06-22 12:26:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-22 12:26:26.106920 | orchestrator | 2025-06-22 12:26:26 | INFO  | Waiting for image to leave queued state...
2025-06-22 12:26:28.148217 | orchestrator | 2025-06-22 12:26:28 | INFO  | Waiting for import to complete...
2025-06-22 12:26:38.279396 | orchestrator | 2025-06-22 12:26:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-06-22 12:26:38.468874 | orchestrator | 2025-06-22 12:26:38 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-06-22 12:26:38.470078 | orchestrator | 2025-06-22 12:26:38 | INFO  | Setting internal_version = 0.6.2
2025-06-22 12:26:38.472038 | orchestrator | 2025-06-22 12:26:38 | INFO  | Setting image_original_user = cirros
2025-06-22 12:26:38.472982 | orchestrator | 2025-06-22 12:26:38 | INFO  | Adding tag os:cirros
2025-06-22 12:26:38.756879 | orchestrator | 2025-06-22 12:26:38 | INFO  | Setting property architecture: x86_64
2025-06-22 12:26:38.975477 | orchestrator | 2025-06-22 12:26:38 | INFO  | Setting property hw_disk_bus: scsi
2025-06-22 12:26:39.182768 | orchestrator | 2025-06-22 12:26:39 | INFO  | Setting property hw_rng_model: virtio
2025-06-22 12:26:39.436194 | orchestrator | 2025-06-22 12:26:39 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-22 12:26:39.646389 | orchestrator | 2025-06-22 12:26:39 | INFO  | Setting property hw_watchdog_action: reset
2025-06-22 12:26:39.866086 | orchestrator | 2025-06-22 12:26:39 | INFO  | Setting property hypervisor_type: qemu
2025-06-22 12:26:40.060381 | orchestrator | 2025-06-22 12:26:40 | INFO  | Setting property os_distro: cirros
2025-06-22 12:26:40.255340 | orchestrator | 2025-06-22 12:26:40 | INFO  | Setting property replace_frequency: never
2025-06-22 12:26:40.455951 | orchestrator | 2025-06-22 12:26:40 | INFO  | Setting property uuid_validity: none
2025-06-22 12:26:40.620048 | orchestrator | 2025-06-22 12:26:40 | INFO  | Setting property provided_until: none
2025-06-22 12:26:40.845499 | orchestrator | 2025-06-22 12:26:40 | INFO  | Setting property image_description: Cirros
2025-06-22 12:26:41.053186 | orchestrator | 2025-06-22 12:26:41 | INFO  | Setting property image_name: Cirros
2025-06-22 12:26:41.243340 | orchestrator | 2025-06-22 12:26:41 | INFO  | Setting property internal_version: 0.6.2
2025-06-22 12:26:41.456820 | orchestrator | 2025-06-22 12:26:41 | INFO  | Setting property image_original_user: cirros
2025-06-22 12:26:41.662691 | orchestrator | 2025-06-22 12:26:41 | INFO  | Setting property os_version: 0.6.2
2025-06-22 12:26:41.896360 | orchestrator | 2025-06-22 12:26:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-22 12:26:42.106953 | orchestrator | 2025-06-22 12:26:42 | INFO  | Setting property image_build_date: 2023-05-30
2025-06-22 12:26:42.315815 | orchestrator | 2025-06-22 12:26:42 | INFO  | Checking status of 'Cirros 0.6.2'
2025-06-22 12:26:42.316010 | orchestrator | 2025-06-22 12:26:42 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-06-22 12:26:42.317314 | orchestrator | 2025-06-22 12:26:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-06-22 12:26:42.526541 | orchestrator | 2025-06-22 12:26:42 | INFO  | Processing image 'Cirros 0.6.3'
2025-06-22 12:26:42.757775 | orchestrator | 2025-06-22 12:26:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-22 12:26:42.758197 | orchestrator | 2025-06-22 12:26:42 | INFO  | Importing image Cirros 0.6.3
2025-06-22 12:26:42.758496 | orchestrator | 2025-06-22 12:26:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-22 12:26:43.082446 | orchestrator | 2025-06-22 12:26:43 | INFO  | Waiting for image to leave queued state...
2025-06-22 12:26:45.118185 | orchestrator | 2025-06-22 12:26:45 | INFO  | Waiting for import to complete...
2025-06-22 12:26:55.258360 | orchestrator | 2025-06-22 12:26:55 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-22 12:26:55.759576 | orchestrator | 2025-06-22 12:26:55 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-22 12:26:55.759785 | orchestrator | 2025-06-22 12:26:55 | INFO  | Setting internal_version = 0.6.3
2025-06-22 12:26:55.760954 | orchestrator | 2025-06-22 12:26:55 | INFO  | Setting image_original_user = cirros
2025-06-22 12:26:55.761919 | orchestrator | 2025-06-22 12:26:55 | INFO  | Adding tag os:cirros
2025-06-22 12:26:55.971306 | orchestrator | 2025-06-22 12:26:55 | INFO  | Setting property architecture: x86_64
2025-06-22 12:26:56.310917 | orchestrator | 2025-06-22 12:26:56 | INFO  | Setting property hw_disk_bus: scsi
2025-06-22 12:26:56.517009 | orchestrator | 2025-06-22 12:26:56 | INFO  | Setting property hw_rng_model: virtio
2025-06-22 12:26:56.707226 | orchestrator | 2025-06-22 12:26:56 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-22 12:26:56.942272 | orchestrator | 2025-06-22 12:26:56 | INFO  | Setting property hw_watchdog_action: reset
2025-06-22 12:26:57.134220 | orchestrator | 2025-06-22 12:26:57 | INFO  | Setting property hypervisor_type: qemu
2025-06-22 12:26:57.344666 | orchestrator | 2025-06-22 12:26:57 | INFO  | Setting property os_distro: cirros
2025-06-22 12:26:57.561981 | orchestrator | 2025-06-22 12:26:57 | INFO  | Setting property replace_frequency: never
2025-06-22 12:26:57.777711 | orchestrator | 2025-06-22 12:26:57 | INFO  | Setting property uuid_validity: none
2025-06-22 12:26:57.985995 | orchestrator | 2025-06-22 12:26:57 | INFO  | Setting property provided_until: none
2025-06-22 12:26:58.238657 | orchestrator | 2025-06-22 12:26:58 | INFO  | Setting property image_description: Cirros
2025-06-22 12:26:58.458878 | orchestrator | 2025-06-22 12:26:58 | INFO  | Setting property image_name: Cirros
2025-06-22 12:26:58.651835 | orchestrator | 2025-06-22 12:26:58 | INFO  | Setting property internal_version: 0.6.3
2025-06-22 12:26:58.881421 | orchestrator | 2025-06-22 12:26:58 | INFO  | Setting property image_original_user: cirros
2025-06-22 12:26:59.091093 | orchestrator | 2025-06-22 12:26:59 | INFO  | Setting property os_version: 0.6.3
2025-06-22 12:26:59.315807 | orchestrator | 2025-06-22 12:26:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-22 12:26:59.579939 | orchestrator | 2025-06-22 12:26:59 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-22 12:26:59.830172 | orchestrator | 2025-06-22 12:26:59 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-22 12:26:59.830762 | orchestrator | 2025-06-22 12:26:59 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-22 12:26:59.831971 | orchestrator | 2025-06-22 12:26:59 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-22 12:27:00.824406 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-22 12:27:02.733000 | orchestrator | 2025-06-22 12:27:02 | INFO  | date: 2025-06-22
2025-06-22 12:27:02.733107 | orchestrator | 2025-06-22 12:27:02 | INFO  | image: octavia-amphora-haproxy-2024.2.20250622.qcow2
2025-06-22 12:27:02.733125 | orchestrator | 2025-06-22 12:27:02 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2
2025-06-22 12:27:02.733159 | orchestrator | 2025-06-22 12:27:02 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2.CHECKSUM
2025-06-22 12:27:02.753276 | orchestrator | 2025-06-22 12:27:02 | INFO  | checksum: 77df9fefb5aab55dc760a767e58162a9735f5740229c1da42280293548a761a7
2025-06-22 12:27:02.821026 | orchestrator | 2025-06-22 12:27:02 | INFO  | It takes a moment until task 144b0f47-4393-44a9-808c-5dc9cbc9ed8c (image-manager) has been started and output is visible here.
2025-06-22 12:27:03.046471 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
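Before handing the amphora image to the image manager, the 301 script logs the image URL, a checksum URL, and the expected SHA-256 digest. The verification step that digest enables can be sketched as follows; `verify_checksum` is a hypothetical helper for illustration, not a function from the script:

```shell
# Verify a downloaded image file against an expected SHA-256 digest,
# e.g. the value read from the .CHECKSUM file next to the qcow2.
verify_checksum() {
    file=$1 expected=$2
    actual=$(sha256sum "$file" | awk '{ print $1 }')
    [ "$actual" = "$expected" ]   # non-zero exit on mismatch
}
```

A mismatch here would abort the bootstrap before a corrupt amphora image ever reaches Glance.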
2025-06-22 12:27:03.048309 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-22 12:27:04.690277 | orchestrator | 2025-06-22 12:27:04 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-22'
2025-06-22 12:27:04.713404 | orchestrator | 2025-06-22 12:27:04 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2: 200
2025-06-22 12:27:04.714402 | orchestrator | 2025-06-22 12:27:04 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-22
2025-06-22 12:27:04.714854 | orchestrator | 2025-06-22 12:27:04 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2
2025-06-22 12:27:06.097181 | orchestrator | 2025-06-22 12:27:06 | INFO  | Waiting for image to leave queued state...
2025-06-22 12:27:08.139630 | orchestrator | 2025-06-22 12:27:08 | INFO  | Waiting for import to complete...
2025-06-22 12:27:18.430227 | orchestrator | 2025-06-22 12:27:18 | INFO  | Waiting for import to complete...
2025-06-22 12:27:28.527148 | orchestrator | 2025-06-22 12:27:28 | INFO  | Waiting for import to complete...
2025-06-22 12:27:38.615251 | orchestrator | 2025-06-22 12:27:38 | INFO  | Waiting for import to complete...
2025-06-22 12:27:48.709814 | orchestrator | 2025-06-22 12:27:48 | INFO  | Waiting for import to complete...
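The repeated "Waiting for import to complete..." lines above come from a poll-until-done loop with a fixed delay. The general pattern, sketched with a hypothetical predicate and helper name (not the image manager's actual code, which is Python):

```shell
# Poll a predicate command until it succeeds or the attempt budget runs out.
# Usage: wait_until <max_tries> <delay_seconds> <command...>
wait_until() {
    tries=$1 delay=$2
    shift 2
    while ! "$@"; do
        tries=$((tries - 1))
        [ "$tries" -gt 0 ] || return 1   # budget exhausted -> failure
        sleep "$delay"
    done
}
```

In the log the delay is roughly ten seconds per probe, so the amphora import above took about five polls (~50 s) to reach the active state.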
2025-06-22 12:27:58.999382 | orchestrator | 2025-06-22 12:27:58 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-22' successfully completed, reloading images
2025-06-22 12:27:59.313061 | orchestrator | 2025-06-22 12:27:59 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-22'
2025-06-22 12:27:59.313172 | orchestrator | 2025-06-22 12:27:59 | INFO  | Setting internal_version = 2025-06-22
2025-06-22 12:27:59.313462 | orchestrator | 2025-06-22 12:27:59 | INFO  | Setting image_original_user = ubuntu
2025-06-22 12:27:59.315150 | orchestrator | 2025-06-22 12:27:59 | INFO  | Adding tag amphora
2025-06-22 12:27:59.564200 | orchestrator | 2025-06-22 12:27:59 | INFO  | Adding tag os:ubuntu
2025-06-22 12:27:59.789692 | orchestrator | 2025-06-22 12:27:59 | INFO  | Setting property architecture: x86_64
2025-06-22 12:28:00.021456 | orchestrator | 2025-06-22 12:28:00 | INFO  | Setting property hw_disk_bus: scsi
2025-06-22 12:28:00.215682 | orchestrator | 2025-06-22 12:28:00 | INFO  | Setting property hw_rng_model: virtio
2025-06-22 12:28:00.400411 | orchestrator | 2025-06-22 12:28:00 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-22 12:28:00.607854 | orchestrator | 2025-06-22 12:28:00 | INFO  | Setting property hw_watchdog_action: reset
2025-06-22 12:28:00.831359 | orchestrator | 2025-06-22 12:28:00 | INFO  | Setting property hypervisor_type: qemu
2025-06-22 12:28:01.033912 | orchestrator | 2025-06-22 12:28:01 | INFO  | Setting property os_distro: ubuntu
2025-06-22 12:28:01.258225 | orchestrator | 2025-06-22 12:28:01 | INFO  | Setting property replace_frequency: quarterly
2025-06-22 12:28:01.452850 | orchestrator | 2025-06-22 12:28:01 | INFO  | Setting property uuid_validity: last-1
2025-06-22 12:28:01.669775 | orchestrator | 2025-06-22 12:28:01 | INFO  | Setting property provided_until: none
2025-06-22 12:28:01.977179 | orchestrator | 2025-06-22 12:28:01 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-06-22 12:28:02.183168 | orchestrator | 2025-06-22 12:28:02 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-06-22 12:28:02.364782 | orchestrator | 2025-06-22 12:28:02 | INFO  | Setting property internal_version: 2025-06-22
2025-06-22 12:28:02.602800 | orchestrator | 2025-06-22 12:28:02 | INFO  | Setting property image_original_user: ubuntu
2025-06-22 12:28:02.815190 | orchestrator | 2025-06-22 12:28:02 | INFO  | Setting property os_version: 2025-06-22
2025-06-22 12:28:03.031726 | orchestrator | 2025-06-22 12:28:03 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2
2025-06-22 12:28:03.225022 | orchestrator | 2025-06-22 12:28:03 | INFO  | Setting property image_build_date: 2025-06-22
2025-06-22 12:28:03.465522 | orchestrator | 2025-06-22 12:28:03 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-22'
2025-06-22 12:28:03.466402 | orchestrator | 2025-06-22 12:28:03 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-22'
2025-06-22 12:28:03.661411 | orchestrator | 2025-06-22 12:28:03 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-06-22 12:28:03.661666 | orchestrator | 2025-06-22 12:28:03 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-06-22 12:28:03.662640 | orchestrator | 2025-06-22 12:28:03 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-06-22 12:28:03.663452 | orchestrator | 2025-06-22 12:28:03 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-06-22 12:28:04.390508 | orchestrator | ok: Runtime: 0:03:01.503801
2025-06-22 12:28:04.455310 |
2025-06-22 12:28:04.455460 | TASK [Run checks]
2025-06-22 12:28:05.132623 | orchestrator | + set -e
2025-06-22 12:28:05.132746 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-22 12:28:05.132758 | orchestrator | ++ export INTERACTIVE=false
2025-06-22 12:28:05.132768 | orchestrator | ++ INTERACTIVE=false
2025-06-22 12:28:05.132774 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-22 12:28:05.132779 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-22 12:28:05.132785 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-22 12:28:05.133736 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-22 12:28:05.139903 | orchestrator |
2025-06-22 12:28:05.139967 | orchestrator | # CHECK
2025-06-22 12:28:05.139982 | orchestrator |
2025-06-22 12:28:05.139995 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-22 12:28:05.140020 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-22 12:28:05.140086 | orchestrator | + echo
2025-06-22 12:28:05.140098 | orchestrator | + echo '# CHECK'
2025-06-22 12:28:05.140109 | orchestrator | + echo
2025-06-22 12:28:05.140125 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-22 12:28:05.140146 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-22 12:28:05.193149 | orchestrator |
2025-06-22 12:28:05.193241 | orchestrator | ## Containers @ testbed-manager
2025-06-22 12:28:05.193256 | orchestrator |
2025-06-22 12:28:05.193271 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-22 12:28:05.193282 | orchestrator | + echo
2025-06-22 12:28:05.193294 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-22 12:28:05.193306 | orchestrator | + echo
2025-06-22 12:28:05.193318 | orchestrator | + osism container testbed-manager ps
2025-06-22 12:28:07.309525 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-22 12:28:07.309693 | orchestrator | b9e644058fbf registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-06-22 12:28:07.309719 | orchestrator | 0831e8662fe3 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager
2025-06-22 12:28:07.309741 | orchestrator | 4c10c6dd7ff2 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-06-22 12:28:07.309753 | orchestrator | 7fa1f2769cfe registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-06-22 12:28:07.309764 | orchestrator | 870d66527443 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server
2025-06-22 12:28:07.309776 | orchestrator | 9de2e67c066d registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient
2025-06-22 12:28:07.309792 | orchestrator | 0053b75f3b6a registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-06-22 12:28:07.309805 | orchestrator | cb32c478ca69 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-06-22 12:28:07.309816 | orchestrator | 647968c34ca8 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-06-22 12:28:07.309852 | orchestrator | 3042388ad2fa phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin
2025-06-22 12:28:07.309864 | orchestrator | 1624256e8755 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient
2025-06-22 12:28:07.309876 | orchestrator | 70e5f597950a registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2025-06-22 12:28:07.309887 | orchestrator | b4b1264d7212 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 54 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-22 12:28:07.309904 | orchestrator | 206e2982ce34 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 57 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2025-06-22 12:28:07.309936 | orchestrator | 20ae315acd2b registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) ceph-ansible
2025-06-22 12:28:07.309948 | orchestrator | c3d1c28b6e33 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) osism-kubernetes
2025-06-22 12:28:07.309960 | orchestrator | 62a856d90703 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) kolla-ansible
2025-06-22 12:28:07.309971 | orchestrator | 7c080e14444e registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) osism-ansible
2025-06-22 12:28:07.309982 | orchestrator | aa1a39eea8a2 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-22 12:28:07.309993 | orchestrator | e0e0ea928512 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-openstack-1
2025-06-22 12:28:07.310004 | orchestrator | 51b06d7900da registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2025-06-22 12:28:07.310051 | orchestrator | c58ad70fb3ce registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-22 12:28:07.310066 | orchestrator | 8a5da99eef6d registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 57 minutes ago Up 40 minutes (healthy) osismclient
2025-06-22 12:28:07.310086 | orchestrator | fba13ca61b13 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-flower-1
2025-06-22 12:28:07.310097 | orchestrator | f9ff29b956f7 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-22 12:28:07.310108 | orchestrator | eef2b53960c5 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-listener-1
2025-06-22 12:28:07.310120 | orchestrator | 4978a2b90950 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-beat-1
2025-06-22 12:28:07.310131 | orchestrator | fe763f325372 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-22 12:28:07.557293 | orchestrator |
2025-06-22 12:28:07.557402 | orchestrator | ## Images @ testbed-manager
2025-06-22 12:28:07.557418 | orchestrator |
2025-06-22 12:28:07.557430 | orchestrator | + echo
2025-06-22 12:28:07.557442 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-22 12:28:07.557454 | orchestrator | + echo
2025-06-22 12:28:07.557465 | orchestrator | + osism container testbed-manager images
2025-06-22 12:28:09.593983 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-22 12:28:09.594127 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e2c78a28297e 9 hours ago 11.5MB
2025-06-22 12:28:09.594148 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 31eca7c9891c 9 hours ago 226MB
2025-06-22 12:28:09.594161 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 2 weeks ago 574MB
2025-06-22 12:28:09.594172 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 3 weeks ago 578MB
2025-06-22 12:28:09.594206 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB
2025-06-22 12:28:09.594218 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB
2025-06-22 12:28:09.594229 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB
2025-06-22 12:28:09.594240 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 3 weeks ago 892MB
2025-06-22 12:28:09.594251 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 3 weeks ago 361MB
2025-06-22 12:28:09.594262 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB
2025-06-22 12:28:09.594273 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB
2025-06-22 12:28:09.594284 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 3 weeks ago 457MB
2025-06-22 12:28:09.594295 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 3 weeks ago 538MB
2025-06-22 12:28:09.594328 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 3 weeks ago 1.21GB
2025-06-22 12:28:09.594340 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 3 weeks ago 308MB
2025-06-22 12:28:09.594351 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 3 weeks ago 297MB
2025-06-22 12:28:09.594362 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 weeks ago 41.4MB
2025-06-22 12:28:09.594373 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 3 weeks ago 224MB
2025-06-22 12:28:09.594384 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 6 weeks ago 453MB
2025-06-22 12:28:09.594395 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB
2025-06-22 12:28:09.594406 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-22 12:28:09.594417 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-22 12:28:09.594428 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB
2025-06-22 12:28:09.847089 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-22 12:28:09.848195 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-22 12:28:09.903323 | orchestrator |
2025-06-22 12:28:09.903393 | orchestrator | ## Containers @ testbed-node-0
2025-06-22 12:28:09.903407 | orchestrator |
2025-06-22 12:28:09.903420 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-22 12:28:09.903433 | orchestrator | + echo
2025-06-22 12:28:09.903444 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-22 12:28:09.903457 | orchestrator | + echo
2025-06-22 12:28:09.903468 | orchestrator | + osism container testbed-node-0 ps
2025-06-22 12:28:12.055755 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-22 12:28:12.055854 | orchestrator | f2a2458c37fe registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-06-22 12:28:12.055865 | orchestrator | ce053de8b0e2 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-06-22 12:28:12.055875 | orchestrator | f2508ac1afe2 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-06-22 12:28:12.055882 | orchestrator | 8f0b8e47a18d registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-22 12:28:12.055889 | orchestrator | fc615c393803 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-06-22 12:28:12.055897 | orchestrator | 91b7cf499d0f registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) glance_api
2025-06-22 12:28:12.055904 | orchestrator | 0e7f41816a80 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-06-22 12:28:12.055925 | orchestrator | 7ad1317ed7ed registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-06-22 12:28:12.055932 | orchestrator | ef83be3c7c60 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) magnum_conductor
2025-06-22 12:28:12.055955 | orchestrator | 0b599194c0bf registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-06-22 12:28:12.055963 | orchestrator | 51ee0ac7ba0e registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-06-22 12:28:12.055970 | orchestrator | 6970fff94bc1 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-06-22 12:28:12.055977 | orchestrator | 268d0f9463bf registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-06-22 12:28:12.055986 | orchestrator | d94716d44e52 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-06-22 12:28:12.055993 | orchestrator | 84da6c9d0dee registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-06-22 12:28:12.056000 | orchestrator | 2764f411de95 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-06-22 12:28:12.056007 | orchestrator | af644ceba760 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-06-22 12:28:12.056015 | orchestrator | 9dcdc6a4317d registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-06-22 12:28:12.056022 | orchestrator | cd54d6383bf6 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-06-22 12:28:12.056044 | orchestrator | e92ef1fc5d83 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-06-22 12:28:12.056052 | orchestrator | 99488cce3706 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-06-22 12:28:12.056060 | orchestrator | 21a3b2e3d3e1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-06-22 12:28:12.056067 | orchestrator | d9242862baf7 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-06-22 12:28:12.056075 | orchestrator | 52017da5e132 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-06-22 12:28:12.056082 | orchestrator | b4199ca58b0f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-06-22 12:28:12.056090 | orchestrator | 73a03a112abb registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-06-22 12:28:12.056101 | orchestrator | 05bd0c6bf8b2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0
2025-06-22 12:28:12.056113 | orchestrator | 144590abc03f registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-06-22 12:28:12.056120 | orchestrator | d88dbe11bf50 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-06-22 12:28:12.056128 | orchestrator | 50cae5cbcd73 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-06-22 12:28:12.056139 | orchestrator | e7d2b96f87f1 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-06-22 12:28:12.056147 | orchestrator | 96d52d956695 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-06-22 12:28:12.056154 | orchestrator | e48bcc154ab8 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-06-22 12:28:12.056162 | orchestrator | 2d1239255473 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-06-22 12:28:12.056169 | orchestrator | 2cf38cc3c6b4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0
2025-06-22 12:28:12.056176 | orchestrator | 3b77807e5d69 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-06-22 12:28:12.056183 | orchestrator | 4b564ee521fb registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-06-22 12:28:12.056190 | orchestrator | d4a20f141cb9 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-06-22 12:28:12.056201 | orchestrator | cc51de2e0d35 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-06-22 12:28:12.056208 | orchestrator | 538298b823a0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-06-22 12:28:12.056222 | orchestrator | 729934b39bfd registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-06-22 12:28:12.056229 | orchestrator | 857f4d163014 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-06-22 12:28:12.056237 | orchestrator | bc12daefa5cc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-06-22 12:28:12.056244 | orchestrator | 50c15c865f67 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-06-22 12:28:12.056251 | orchestrator | 904ca069c1d7 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-06-22 12:28:12.056262 | orchestrator | 5f9b6ed4a1e4 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-06-22 12:28:12.056270 | orchestrator | df28e7afc43f registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-06-22 12:28:12.056277 | orchestrator | a63b75250483 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-06-22 12:28:12.056284 | orchestrator | 03e98bd7eddc registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-06-22 12:28:12.056292 | orchestrator | be8f53f5e1d5 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 30 minutes cron
2025-06-22 12:28:12.056299 | orchestrator | a21c7b1071f4 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox
2025-06-22 12:28:12.056307 | orchestrator | 3a1c3ae61921 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-06-22 12:28:12.296318 | orchestrator |
2025-06-22 12:28:12.296433 | orchestrator | ## Images @ testbed-node-0
2025-06-22 12:28:12.296447 | orchestrator |
2025-06-22 12:28:12.296456 | orchestrator | + echo
2025-06-22 12:28:12.296465 | orchestrator | + echo '## Images @ testbed-node-0'
2025-06-22 12:28:12.296475 | orchestrator | + echo
2025-06-22 12:28:12.296483 | orchestrator | + osism container testbed-node-0 images
2025-06-22 12:28:14.385072 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-22 12:28:14.385185 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB
2025-06-22 12:28:14.385201 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB
2025-06-22 12:28:14.385213 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB
2025-06-22 12:28:14.385223 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB
2025-06-22 12:28:14.385235 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB
2025-06-22 12:28:14.385245 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB
2025-06-22 12:28:14.385256 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB
2025-06-22 12:28:14.385267 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB
2025-06-22 12:28:14.385278 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB
2025-06-22 12:28:14.385289 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB
2025-06-22 12:28:14.385300 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB
2025-06-22 12:28:14.385310 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB
2025-06-22 12:28:14.385321 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB
2025-06-22 12:28:14.385356 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB
2025-06-22 12:28:14.385368 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB
2025-06-22 12:28:14.385379 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB
2025-06-22 12:28:14.385389 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB
2025-06-22 12:28:14.385400 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB
2025-06-22 12:28:14.385411 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB
2025-06-22 12:28:14.385440 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB
2025-06-22 12:28:14.385451 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB
2025-06-22 12:28:14.385462 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB
2025-06-22 12:28:14.385472 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB
2025-06-22 12:28:14.385483 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB
2025-06-22 12:28:14.385494 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB
2025-06-22 12:28:14.385504 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 3 weeks ago 1.04GB
2025-06-22 12:28:14.385515 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 3 weeks ago 1.04GB
2025-06-22 12:28:14.385525 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 3 weeks ago 1.04GB
2025-06-22 12:28:14.385536 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 3 weeks ago 1.04GB
2025-06-22 12:28:14.385546 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB
2025-06-22 12:28:14.385557 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB
2025-06-22 12:28:14.385639 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 3 weeks ago 1.12GB
2025-06-22 12:28:14.385655 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 3 weeks ago 1.12GB
2025-06-22 12:28:14.385667 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 3 weeks ago 1.1GB
2025-06-22 12:28:14.385680 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 3 weeks ago 1.1GB
2025-06-22 12:28:14.385692 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 3 weeks ago 1.1GB
2025-06-22 12:28:14.385705 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB
2025-06-22 12:28:14.385717 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB
2025-06-22 12:28:14.385729 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB
2025-06-22 12:28:14.385741 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB
2025-06-22 12:28:14.385762 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB
2025-06-22 12:28:14.385773 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB
2025-06-22 12:28:14.385784 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB
2025-06-22 12:28:14.385795 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB
2025-06-22 12:28:14.385812 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 3 weeks ago 1.04GB
2025-06-22 12:28:14.385823 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 3 weeks ago 1.04GB
2025-06-22 12:28:14.385833 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB
2025-06-22 12:28:14.385844 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB
2025-06-22 12:28:14.385855 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB
2025-06-22 12:28:14.385866 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB
2025-06-22 12:28:14.385876 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB
2025-06-22 12:28:14.385894 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB
2025-06-22 12:28:14.385913 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB
2025-06-22 12:28:14.385933 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB
2025-06-22 12:28:14.385955 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB
2025-06-22 12:28:14.385974 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB
2025-06-22 12:28:14.385990 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 3 weeks ago 1.11GB
2025-06-22 12:28:14.386001 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 3 weeks ago 1.12GB
2025-06-22 12:28:14.386012 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB
2025-06-22 12:28:14.386145 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB
2025-06-22 12:28:14.386157 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB
2025-06-22 12:28:14.386168 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB
2025-06-22 12:28:14.386179 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB
2025-06-22 12:28:14.679785 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-22 12:28:14.680369 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-22 12:28:14.746718 | orchestrator |
2025-06-22 12:28:14.746797 | orchestrator | ## Containers @ testbed-node-1
2025-06-22 12:28:14.746806 | orchestrator |
2025-06-22 12:28:14.746814 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-22 12:28:14.746821 | orchestrator | + echo
2025-06-22 12:28:14.746829 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-22 12:28:14.746837 | orchestrator | + echo
2025-06-22 12:28:14.746870 | orchestrator | + osism container testbed-node-1 ps
2025-06-22 12:28:16.897403 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-22 12:28:16.897515 | orchestrator | 3ec7438c9a4b registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-06-22 12:28:16.897532 | orchestrator | f9f53efab643 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-06-22 12:28:16.897544 | orchestrator | a9b380d95c6c registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-06-22 12:28:16.897556 | orchestrator | 020f90e5ce5a registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-06-22 12:28:16.897633 | orchestrator | 8680bda815ba registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-22 12:28:16.897646 | orchestrator | ee1243f029f9 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2025-06-22 12:28:16.897658 | orchestrator | d1996632f879 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-06-22 12:28:16.897668 | orchestrator | d4d4d33cbca2 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-06-22 12:28:16.897679 | orchestrator | 5f80517e3af4 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2025-06-22 12:28:16.897690 | orchestrator | 2783656c0558 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-06-22 12:28:16.897701 | orchestrator | bb359d6234a0 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-06-22 12:28:16.897712 | orchestrator | 778c0e64a24d registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-06-22 12:28:16.897722 | orchestrator | be361c551d48 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-06-22 12:28:16.897736 | orchestrator | 07df52ff8edf registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-06-22 12:28:16.897747 | orchestrator | cb7a7207c2a0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-06-22 12:28:16.897758 | orchestrator | e25292134e43 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-06-22 12:28:16.897769 | orchestrator | 26a1d5897a9d registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-06-22 12:28:16.897805 | orchestrator | f645dac06743 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-06-22 12:28:16.897817 | orchestrator | 579bea29e6da registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-06-22 12:28:16.897846 | orchestrator | 5defc61e773d registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-06-22 12:28:16.897858 | orchestrator | 8a982c7597b3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-06-22 12:28:16.897869 | orchestrator | 07089c5c418c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-06-22 12:28:16.897880 | orchestrator | ea50321ad6c5 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-06-22 12:28:16.897896 | orchestrator | 8f4ecfc61db5 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-06-22 12:28:16.897914 | orchestrator | 1621298f6b40 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-06-22 12:28:16.897926 | orchestrator | e81c64ed6165 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-06-22 12:28:16.897939 | orchestrator | 2e7a09eb8c79 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1
2025-06-22 12:28:16.897951 | orchestrator | 638bc9825392 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-06-22 12:28:16.897964 | orchestrator | 835e2da9b857 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-06-22 12:28:16.897976 | orchestrator | 52f7a2d41973 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-06-22 12:28:16.897989 | orchestrator | 91a0c1e49a28 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-06-22 12:28:16.898000 | orchestrator | 2c17d07d2404 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-06-22 12:28:16.898082 | orchestrator | 0a087b5b8350 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-06-22 12:28:16.898097 | orchestrator | 857bae6473d6 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-06-22 12:28:16.898110 | orchestrator | 0a1496b76773 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1
2025-06-22 12:28:16.898132 | orchestrator | a79f2666650a registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-06-22 12:28:16.898144 | orchestrator | 65e14afd6516 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-06-22 12:28:16.898156 | orchestrator | 5f102c9d491e registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-06-22 12:28:16.898168 | orchestrator | 5d0d239e3f50 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-06-22 12:28:16.898181 | orchestrator | 2ac90b1e22af registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-06-22 12:28:16.898204 | orchestrator | fc788046bf07 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db
2025-06-22 12:28:16.898217 | orchestrator | 50318c58ba55 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-06-22 12:28:16.898229 | orchestrator | 0938b891831c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-06-22 12:28:16.898241 | orchestrator | 025e7b2839dd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-06-22 12:28:16.898254 | orchestrator | fc4429b952e7 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-06-22 12:28:16.898267 | orchestrator | b08b5fb9c4c6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-06-22 12:28:16.898280 | orchestrator | 516f4b7a6fd9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-06-22 12:28:16.898298 | orchestrator | 8240a2f98b57 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-06-22 12:28:16.898309 | orchestrator | 7ae6b0284603 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-06-22 12:28:16.898320 | orchestrator | 1f6edb5e0d3b registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-06-22 12:28:16.898331 | orchestrator | fe750092549c registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-06-22 12:28:16.898342 | orchestrator | 55d56aa23281 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-06-22 12:28:17.134243 | orchestrator |
2025-06-22 12:28:17.134329 | orchestrator | ## Images @ testbed-node-1
2025-06-22 12:28:17.134344 | orchestrator |
2025-06-22 12:28:17.134356 | orchestrator | + echo
2025-06-22 12:28:17.134368 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-22 12:28:17.134380 | orchestrator | + echo
2025-06-22 12:28:17.134391 | orchestrator | + osism container testbed-node-1 images
2025-06-22 12:28:19.175849 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-22 12:28:19.175996 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB
2025-06-22 12:28:19.176068 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB
2025-06-22 12:28:19.176084 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB
2025-06-22 12:28:19.176095 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB
2025-06-22 12:28:19.176106 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB
2025-06-22 12:28:19.176117 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB
2025-06-22 12:28:19.176128 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB
2025-06-22 12:28:19.176139 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB
2025-06-22 12:28:19.176150 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB
2025-06-22 12:28:19.176161 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB
2025-06-22 12:28:19.176172 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB
2025-06-22 12:28:19.176183 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB
2025-06-22 12:28:19.176194 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB
2025-06-22 12:28:19.176205 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB
2025-06-22 12:28:19.176216 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB
2025-06-22 12:28:19.176227 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB
2025-06-22 12:28:19.176238 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB
2025-06-22 12:28:19.176249 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB
2025-06-22 12:28:19.176260 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB
2025-06-22 12:28:19.176270 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB
2025-06-22 12:28:19.176282 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB
2025-06-22 12:28:19.176293 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB
2025-06-22 12:28:19.176304 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB
2025-06-22 12:28:19.176315 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB
2025-06-22 12:28:19.176326 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 12:28:19.176337 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 12:28:19.176356 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 12:28:19.176367 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 12:28:19.176380 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 12:28:19.176393 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 12:28:19.176405 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 12:28:19.176458 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 12:28:19.176471 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 12:28:19.176483 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 12:28:19.176496 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 12:28:19.176508 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 12:28:19.176521 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 12:28:19.176532 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 
12:28:19.176543 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 12:28:19.176554 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 12:28:19.176565 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 12:28:19.176576 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 12:28:19.176643 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 12:28:19.176657 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 12:28:19.176668 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 12:28:19.176679 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 12:28:19.176690 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 12:28:19.176700 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 12:28:19.176711 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 12:28:19.176722 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 12:28:19.434125 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 12:28:19.434412 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 12:28:19.484409 | orchestrator | 2025-06-22 12:28:19.484493 | orchestrator | ## Containers @ testbed-node-2 2025-06-22 12:28:19.484507 | 
orchestrator | 2025-06-22 12:28:19.484519 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 12:28:19.484530 | orchestrator | + echo 2025-06-22 12:28:19.484575 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-22 12:28:19.484610 | orchestrator | + echo 2025-06-22 12:28:19.484622 | orchestrator | + osism container testbed-node-2 ps 2025-06-22 12:28:21.585120 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 12:28:21.585223 | orchestrator | d4f077b56630 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-22 12:28:21.585238 | orchestrator | 190a658a0301 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 12:28:21.585250 | orchestrator | ab8b86cb521a registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 12:28:21.585261 | orchestrator | b7e72b0e4d31 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-22 12:28:21.585272 | orchestrator | a5e4e76b964d registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-22 12:28:21.585283 | orchestrator | 041cb5a8ff50 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 12:28:21.585448 | orchestrator | 568aa6db1a82 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 12:28:21.585556 | orchestrator | 384114d3b7d7 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 
12:28:21.585652 | orchestrator | a1e9982bfdf5 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-22 12:28:21.585679 | orchestrator | 50eb8eebca18 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 12:28:21.585692 | orchestrator | 2d698af62671 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 12:28:21.585703 | orchestrator | 9bc890e2f06c registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 12:28:21.585714 | orchestrator | 1ebbd2d55780 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-06-22 12:28:21.585728 | orchestrator | cd8d39bfa3ce registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 12:28:21.585739 | orchestrator | 298a09122142 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 12:28:21.585769 | orchestrator | 1af506110dee registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-06-22 12:28:21.585781 | orchestrator | 6ff228898865 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 12:28:21.585815 | orchestrator | 9e469d0f84bd registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) 
designate_central 2025-06-22 12:28:21.585884 | orchestrator | 7ddbb95eacee registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-06-22 12:28:21.585897 | orchestrator | ebb797b739d0 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 12:28:21.585907 | orchestrator | f4c16918d252 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-22 12:28:21.585918 | orchestrator | f37982901499 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-22 12:28:21.585929 | orchestrator | 2daf111aeff8 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-06-22 12:28:21.585940 | orchestrator | 42c6db4e9e4b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-22 12:28:21.585951 | orchestrator | 460b3ab83f67 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-22 12:28:21.585962 | orchestrator | 941aba099363 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 12:28:21.585975 | orchestrator | 7c3789c7e27b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-06-22 12:28:21.586006 | orchestrator | 7c9602e79a61 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init 
--single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 12:28:21.586174 | orchestrator | 83c1d0652db2 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 12:28:21.586191 | orchestrator | 5bb159621496 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-22 12:28:21.586203 | orchestrator | 3a162ea3f201 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 12:28:21.586214 | orchestrator | fa376ce19865 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-22 12:28:21.586224 | orchestrator | 51719003b568 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-22 12:28:21.586338 | orchestrator | 89d2e0e1c13e registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-22 12:28:21.586409 | orchestrator | 68bb43abdc92 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-06-22 12:28:21.586448 | orchestrator | 5519743dd672 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-22 12:28:21.586461 | orchestrator | 2bb6bafe1777 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 12:28:21.586472 | orchestrator | f1e2e7f57f80 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 12:28:21.586483 | 
orchestrator | 64846b98d457 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-22 12:28:21.586494 | orchestrator | 911775c4fbf7 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 12:28:21.586506 | orchestrator | eca094c3293d registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 12:28:21.586517 | orchestrator | ac82431ce823 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 12:28:21.586528 | orchestrator | 7affc0e5aae1 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-22 12:28:21.586540 | orchestrator | c9385fc5c0a0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-06-22 12:28:21.586553 | orchestrator | f3e6f334570b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-22 12:28:21.586564 | orchestrator | 84ab8d50d690 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-22 12:28:21.586575 | orchestrator | 6b3685366680 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 12:28:21.586613 | orchestrator | 5df1d734d1ba registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-22 12:28:21.586643 | orchestrator | 060111ee18fc registry.osism.tech/kolla/release/memcached:1.6.18.20250530 
"dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 12:28:21.586655 | orchestrator | 9c5f99b57df9 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-22 12:28:21.586667 | orchestrator | 1064d7a371c4 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 12:28:21.586679 | orchestrator | 9159ae310260 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-22 12:28:21.835320 | orchestrator | 2025-06-22 12:28:21.835411 | orchestrator | ## Images @ testbed-node-2 2025-06-22 12:28:21.835425 | orchestrator | 2025-06-22 12:28:21.835437 | orchestrator | + echo 2025-06-22 12:28:21.835449 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-22 12:28:21.835461 | orchestrator | + echo 2025-06-22 12:28:21.835494 | orchestrator | + osism container testbed-node-2 images 2025-06-22 12:28:23.935540 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 12:28:23.935682 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB 2025-06-22 12:28:23.935699 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 12:28:23.935711 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 12:28:23.935739 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 12:28:23.935751 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 12:28:23.935762 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 12:28:23.935772 | orchestrator | 
registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 12:28:23.935783 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 12:28:23.935794 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 12:28:23.935805 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 12:28:23.935815 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 12:28:23.935826 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 12:28:23.935836 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 12:28:23.935847 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 12:28:23.935858 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 12:28:23.935869 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 12:28:23.935880 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 12:28:23.935891 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 12:28:23.935902 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 2025-06-22 12:28:23.935912 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 12:28:23.935923 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 12:28:23.935934 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 12:28:23.935944 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 12:28:23.935955 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 12:28:23.935966 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 12:28:23.935998 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 12:28:23.936009 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 12:28:23.936020 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 12:28:23.936030 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 12:28:23.936041 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 12:28:23.936052 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 12:28:23.936079 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 12:28:23.936093 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 12:28:23.936106 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 12:28:23.936118 | orchestrator | 
registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 12:28:23.936130 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 12:28:23.936142 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 12:28:23.936154 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 12:28:23.936166 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 12:28:23.936178 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 12:28:23.936190 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 12:28:23.936202 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 12:28:23.936215 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 12:28:23.936227 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 12:28:23.936240 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 12:28:23.936252 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 12:28:23.936264 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 12:28:23.936276 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 12:28:23.936288 | orchestrator | 
registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 12:28:23.936300 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 12:28:24.198230 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-22 12:28:24.204791 | orchestrator | + set -e 2025-06-22 12:28:24.204833 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 12:28:24.205394 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 12:28:24.205450 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 12:28:24.205463 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 12:28:24.205474 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 12:28:24.205486 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 12:28:24.205498 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 12:28:24.205517 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 12:28:24.205528 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 12:28:24.205539 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 12:28:24.205550 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 12:28:24.205561 | orchestrator | ++ export ARA=false 2025-06-22 12:28:24.205572 | orchestrator | ++ ARA=false 2025-06-22 12:28:24.205583 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 12:28:24.205625 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 12:28:24.205636 | orchestrator | ++ export TEMPEST=false 2025-06-22 12:28:24.205648 | orchestrator | ++ TEMPEST=false 2025-06-22 12:28:24.205658 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 12:28:24.205669 | orchestrator | ++ IS_ZUUL=true 2025-06-22 12:28:24.205687 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200 2025-06-22 12:28:24.205698 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200 2025-06-22 12:28:24.205709 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 12:28:24.205720 | orchestrator | ++ 
EXTERNAL_API=false 2025-06-22 12:28:24.205730 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 12:28:24.205741 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 12:28:24.205752 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 12:28:24.205763 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 12:28:24.205773 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 12:28:24.205784 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 12:28:24.205795 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 12:28:24.205806 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-22 12:28:24.215974 | orchestrator | + set -e 2025-06-22 12:28:24.216080 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 12:28:24.216104 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 12:28:24.216125 | orchestrator | ++ INTERACTIVE=false 2025-06-22 12:28:24.216144 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 12:28:24.216162 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 12:28:24.216180 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 12:28:24.216625 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 12:28:24.219700 | orchestrator | 2025-06-22 12:28:24.219753 | orchestrator | # Ceph status 2025-06-22 12:28:24.219768 | orchestrator | 2025-06-22 12:28:24.219782 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 12:28:24.219794 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 12:28:24.219805 | orchestrator | + echo 2025-06-22 12:28:24.219821 | orchestrator | + echo '# Ceph status' 2025-06-22 12:28:24.219833 | orchestrator | + echo 2025-06-22 12:28:24.219845 | orchestrator | + ceph -s 2025-06-22 12:28:24.816832 | orchestrator | cluster: 2025-06-22 12:28:24.816963 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-22 
12:28:24.816992 | orchestrator | health: HEALTH_OK 2025-06-22 12:28:24.817014 | orchestrator | 2025-06-22 12:28:24.817033 | orchestrator | services: 2025-06-22 12:28:24.817048 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-06-22 12:28:24.817061 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-1, testbed-node-0 2025-06-22 12:28:24.817073 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-22 12:28:24.817085 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 25m) 2025-06-22 12:28:24.817096 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-22 12:28:24.817107 | orchestrator | 2025-06-22 12:28:24.817118 | orchestrator | data: 2025-06-22 12:28:24.817129 | orchestrator | volumes: 1/1 healthy 2025-06-22 12:28:24.817140 | orchestrator | pools: 14 pools, 401 pgs 2025-06-22 12:28:24.817151 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-22 12:28:24.817162 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-22 12:28:24.817173 | orchestrator | pgs: 401 active+clean 2025-06-22 12:28:24.817184 | orchestrator | 2025-06-22 12:28:24.869763 | orchestrator | 2025-06-22 12:28:24.869875 | orchestrator | # Ceph versions 2025-06-22 12:28:24.869898 | orchestrator | 2025-06-22 12:28:24.869918 | orchestrator | + echo 2025-06-22 12:28:24.869936 | orchestrator | + echo '# Ceph versions' 2025-06-22 12:28:24.869954 | orchestrator | + echo 2025-06-22 12:28:24.869971 | orchestrator | + ceph versions 2025-06-22 12:28:25.434195 | orchestrator | { 2025-06-22 12:28:25.434301 | orchestrator | "mon": { 2025-06-22 12:28:25.434343 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 12:28:25.434358 | orchestrator | }, 2025-06-22 12:28:25.434369 | orchestrator | "mgr": { 2025-06-22 12:28:25.434381 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 
12:28:25.434392 | orchestrator |   },
2025-06-22 12:28:25.434404 | orchestrator |   "osd": {
2025-06-22 12:28:25.434415 | orchestrator |     "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-06-22 12:28:25.434427 | orchestrator |   },
2025-06-22 12:28:25.434452 | orchestrator |   "mds": {
2025-06-22 12:28:25.434464 | orchestrator |     "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-06-22 12:28:25.434475 | orchestrator |   },
2025-06-22 12:28:25.434487 | orchestrator |   "rgw": {
2025-06-22 12:28:25.434498 | orchestrator |     "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-06-22 12:28:25.434509 | orchestrator |   },
2025-06-22 12:28:25.434521 | orchestrator |   "overall": {
2025-06-22 12:28:25.434533 | orchestrator |     "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-06-22 12:28:25.434545 | orchestrator |   }
2025-06-22 12:28:25.434556 | orchestrator | }
2025-06-22 12:28:25.492876 | orchestrator |
2025-06-22 12:28:25.492991 | orchestrator | # Ceph OSD tree
2025-06-22 12:28:25.493018 | orchestrator |
2025-06-22 12:28:25.493040 | orchestrator | + echo
2025-06-22 12:28:25.493052 | orchestrator | + echo '# Ceph OSD tree'
2025-06-22 12:28:25.493064 | orchestrator | + echo
2025-06-22 12:28:25.493075 | orchestrator | + ceph osd df tree
2025-06-22 12:28:26.018139 | orchestrator | ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
2025-06-22 12:28:26.018257 | orchestrator | -1         0.11691         -  120 GiB  7.1 GiB  6.7 GiB   6 KiB  430 MiB  113 GiB  5.92  1.00    -          root default
2025-06-22 12:28:26.018272 | orchestrator | -3         0.03897         -   40 GiB  2.4 GiB  2.2 GiB   2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-3
2025-06-22 12:28:26.018284 | orchestrator |  0    hdd  0.01949   1.00000   20 GiB  1.5 GiB  1.4 GiB   1 KiB   70 MiB   19 GiB  7.31  1.24  200      up  osd.0
2025-06-22 12:28:26.018295 | orchestrator |  4    hdd  0.01949   1.00000   20 GiB  924 MiB  851 MiB   1 KiB   74 MiB   19 GiB  4.52  0.76  190      up  osd.4
2025-06-22 12:28:26.018306 | orchestrator | -7         0.03897         -   40 GiB  2.4 GiB  2.2 GiB   2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-4
2025-06-22 12:28:26.018317 | orchestrator |  1    hdd  0.01949   1.00000   20 GiB  1.3 GiB  1.2 GiB   1 KiB   74 MiB   19 GiB  6.36  1.07  184      up  osd.1
2025-06-22 12:28:26.018328 | orchestrator |  3    hdd  0.01949   1.00000   20 GiB  1.1 GiB  1.0 GiB   1 KiB   70 MiB   19 GiB  5.48  0.93  204      up  osd.3
2025-06-22 12:28:26.018338 | orchestrator | -5         0.03897         -   40 GiB  2.4 GiB  2.2 GiB   2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-5
2025-06-22 12:28:26.018349 | orchestrator |  2    hdd  0.01949   1.00000   20 GiB  1.4 GiB  1.3 GiB   1 KiB   74 MiB   19 GiB  6.98  1.18  206      up  osd.2
2025-06-22 12:28:26.018360 | orchestrator |  5    hdd  0.01949   1.00000   20 GiB  992 MiB  923 MiB   1 KiB   70 MiB   19 GiB  4.85  0.82  186      up  osd.5
2025-06-22 12:28:26.018371 | orchestrator |                       TOTAL   120 GiB  7.1 GiB  6.7 GiB  9.3 KiB  430 MiB  113 GiB  5.92
2025-06-22 12:28:26.018382 | orchestrator | MIN/MAX VAR: 0.76/1.24  STDDEV: 1.05
2025-06-22 12:28:26.063505 | orchestrator |
2025-06-22 12:28:26.063662 | orchestrator | # Ceph monitor status
2025-06-22 12:28:26.063679 | orchestrator |
2025-06-22 12:28:26.063691 | orchestrator | + echo
2025-06-22 12:28:26.063703 | orchestrator | + echo '# Ceph monitor status'
2025-06-22 12:28:26.063715 | orchestrator | + echo
2025-06-22 12:28:26.063725 | orchestrator | + ceph mon stat
2025-06-22 12:28:26.644555 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-06-22 12:28:26.693479 | orchestrator |
2025-06-22 12:28:26.693557 | orchestrator | # Ceph quorum status
2025-06-22 12:28:26.693567 | orchestrator |
2025-06-22 12:28:26.693575 | orchestrator | + echo
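The `ceph osd df tree` listing above can be checked mechanically rather than by eye. A minimal sketch of such a check, assuming the plain-text column layout shown in this log (column positions are not a stable API; `ceph osd df tree -f json` would be the robust variant). The helper name `osds_not_up` and the embedded sample are illustrative, not part of the testbed tooling:

```python
# Sketch: flag OSD rows in `ceph osd df tree` output whose STATUS is not "up".
# SAMPLE is a fragment of the output shown in the log above.
SAMPLE = """\
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP   META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         0.11691         -  120 GiB  7.1 GiB  6.7 GiB  6 KiB  430 MiB  113 GiB  5.92  1.00    -          root default
 0    hdd  0.01949   1.00000   20 GiB  1.5 GiB  1.4 GiB  1 KiB   70 MiB   19 GiB  7.31  1.24  200      up  osd.0
 4    hdd  0.01949   1.00000   20 GiB  924 MiB  851 MiB  1 KiB   74 MiB   19 GiB  4.52  0.76  190      up  osd.4
"""

def osds_not_up(text: str) -> list[str]:
    """Return the names of OSD rows whose STATUS field is not 'up'."""
    bad = []
    for line in text.splitlines():
        fields = line.split()
        # OSD rows end with "osd.<id>"; STATUS is the second-to-last field.
        if fields and fields[-1].startswith("osd.") and fields[-2] != "up":
            bad.append(fields[-1])
    return bad

print(osds_not_up(SAMPLE))  # -> []
```

In the run recorded here all six OSDs report `up`, so such a check would pass.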
2025-06-22 12:28:26.693583 | orchestrator | + echo '# Ceph quorum status'
2025-06-22 12:28:26.693628 | orchestrator | + echo
2025-06-22 12:28:26.693834 | orchestrator | + ceph quorum_status
2025-06-22 12:28:26.694087 | orchestrator | + jq
2025-06-22 12:28:27.344107 | orchestrator | {
2025-06-22 12:28:27.344224 | orchestrator |   "election_epoch": 6,
2025-06-22 12:28:27.344248 | orchestrator |   "quorum": [
2025-06-22 12:28:27.344269 | orchestrator |     0,
2025-06-22 12:28:27.344287 | orchestrator |     1,
2025-06-22 12:28:27.344298 | orchestrator |     2
2025-06-22 12:28:27.344309 | orchestrator |   ],
2025-06-22 12:28:27.344320 | orchestrator |   "quorum_names": [
2025-06-22 12:28:27.344331 | orchestrator |     "testbed-node-0",
2025-06-22 12:28:27.344342 | orchestrator |     "testbed-node-1",
2025-06-22 12:28:27.344353 | orchestrator |     "testbed-node-2"
2025-06-22 12:28:27.344364 | orchestrator |   ],
2025-06-22 12:28:27.344376 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2025-06-22 12:28:27.344388 | orchestrator |   "quorum_age": 1737,
2025-06-22 12:28:27.344399 | orchestrator |   "features": {
2025-06-22 12:28:27.344410 | orchestrator |     "quorum_con": "4540138322906710015",
2025-06-22 12:28:27.344422 | orchestrator |     "quorum_mon": [
2025-06-22 12:28:27.344432 | orchestrator |       "kraken",
2025-06-22 12:28:27.344443 | orchestrator |       "luminous",
2025-06-22 12:28:27.344454 | orchestrator |       "mimic",
2025-06-22 12:28:27.344465 | orchestrator |       "osdmap-prune",
2025-06-22 12:28:27.344476 | orchestrator |       "nautilus",
2025-06-22 12:28:27.344487 | orchestrator |       "octopus",
2025-06-22 12:28:27.344498 | orchestrator |       "pacific",
2025-06-22 12:28:27.344508 | orchestrator |       "elector-pinging",
2025-06-22 12:28:27.344519 | orchestrator |       "quincy",
2025-06-22 12:28:27.344530 | orchestrator |       "reef"
2025-06-22 12:28:27.344541 | orchestrator |     ]
2025-06-22 12:28:27.344552 | orchestrator |   },
2025-06-22 12:28:27.344563 | orchestrator |   "monmap": {
2025-06-22 12:28:27.344574 | orchestrator |     "epoch": 1,
2025-06-22 12:28:27.344656 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2025-06-22 12:28:27.344671 | orchestrator |     "modified": "2025-06-22T11:59:06.854723Z",
2025-06-22 12:28:27.344852 | orchestrator |     "created": "2025-06-22T11:59:06.854723Z",
2025-06-22 12:28:27.344865 | orchestrator |     "min_mon_release": 18,
2025-06-22 12:28:27.344877 | orchestrator |     "min_mon_release_name": "reef",
2025-06-22 12:28:27.344888 | orchestrator |     "election_strategy": 1,
2025-06-22 12:28:27.344899 | orchestrator |     "disallowed_leaders: ": "",
2025-06-22 12:28:27.344909 | orchestrator |     "stretch_mode": false,
2025-06-22 12:28:27.344920 | orchestrator |     "tiebreaker_mon": "",
2025-06-22 12:28:27.344931 | orchestrator |     "removed_ranks: ": "",
2025-06-22 12:28:27.344942 | orchestrator |     "features": {
2025-06-22 12:28:27.344953 | orchestrator |       "persistent": [
2025-06-22 12:28:27.344963 | orchestrator |         "kraken",
2025-06-22 12:28:27.344974 | orchestrator |         "luminous",
2025-06-22 12:28:27.344984 | orchestrator |         "mimic",
2025-06-22 12:28:27.344995 | orchestrator |         "osdmap-prune",
2025-06-22 12:28:27.345006 | orchestrator |         "nautilus",
2025-06-22 12:28:27.345016 | orchestrator |         "octopus",
2025-06-22 12:28:27.345027 | orchestrator |         "pacific",
2025-06-22 12:28:27.345038 | orchestrator |         "elector-pinging",
2025-06-22 12:28:27.345048 | orchestrator |         "quincy",
2025-06-22 12:28:27.345059 | orchestrator |         "reef"
2025-06-22 12:28:27.345070 | orchestrator |       ],
2025-06-22 12:28:27.345081 | orchestrator |       "optional": []
2025-06-22 12:28:27.345092 | orchestrator |     },
2025-06-22 12:28:27.345103 | orchestrator |     "mons": [
2025-06-22 12:28:27.345114 | orchestrator |       {
2025-06-22 12:28:27.345125 | orchestrator |         "rank": 0,
2025-06-22 12:28:27.345138 | orchestrator |         "name": "testbed-node-0",
2025-06-22 12:28:27.345157 | orchestrator |         "public_addrs": {
2025-06-22 12:28:27.345176 | orchestrator |           "addrvec": [
2025-06-22 12:28:27.345194 | orchestrator |             {
2025-06-22 12:28:27.345205 | orchestrator |               "type": "v2",
2025-06-22 12:28:27.345216 | orchestrator |               "addr": "192.168.16.10:3300",
2025-06-22 12:28:27.345227 | orchestrator |               "nonce": 0
2025-06-22 12:28:27.345237 | orchestrator |             },
2025-06-22 12:28:27.345248 | orchestrator |             {
2025-06-22 12:28:27.345259 | orchestrator |               "type": "v1",
2025-06-22 12:28:27.345269 | orchestrator |               "addr": "192.168.16.10:6789",
2025-06-22 12:28:27.345279 | orchestrator |               "nonce": 0
2025-06-22 12:28:27.345290 | orchestrator |             }
2025-06-22 12:28:27.345300 | orchestrator |           ]
2025-06-22 12:28:27.345312 | orchestrator |         },
2025-06-22 12:28:27.345322 | orchestrator |         "addr": "192.168.16.10:6789/0",
2025-06-22 12:28:27.345358 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2025-06-22 12:28:27.345369 | orchestrator |         "priority": 0,
2025-06-22 12:28:27.345379 | orchestrator |         "weight": 0,
2025-06-22 12:28:27.345390 | orchestrator |         "crush_location": "{}"
2025-06-22 12:28:27.345400 | orchestrator |       },
2025-06-22 12:28:27.345411 | orchestrator |       {
2025-06-22 12:28:27.345421 | orchestrator |         "rank": 1,
2025-06-22 12:28:27.345432 | orchestrator |         "name": "testbed-node-1",
2025-06-22 12:28:27.345442 | orchestrator |         "public_addrs": {
2025-06-22 12:28:27.345452 | orchestrator |           "addrvec": [
2025-06-22 12:28:27.345463 | orchestrator |             {
2025-06-22 12:28:27.345474 | orchestrator |               "type": "v2",
2025-06-22 12:28:27.345484 | orchestrator |               "addr": "192.168.16.11:3300",
2025-06-22 12:28:27.345495 | orchestrator |               "nonce": 0
2025-06-22 12:28:27.345505 | orchestrator |             },
2025-06-22 12:28:27.345516 | orchestrator |             {
2025-06-22 12:28:27.345527 | orchestrator |               "type": "v1",
2025-06-22 12:28:27.345538 | orchestrator |               "addr": "192.168.16.11:6789",
2025-06-22 12:28:27.345548 | orchestrator |               "nonce": 0
2025-06-22 12:28:27.345559 | orchestrator |             }
2025-06-22 12:28:27.345570 | orchestrator |           ]
2025-06-22 12:28:27.345580 | orchestrator |         },
2025-06-22 12:28:27.345627 | orchestrator |         "addr": "192.168.16.11:6789/0",
2025-06-22 12:28:27.345638 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2025-06-22 12:28:27.345649 | orchestrator |         "priority": 0,
2025-06-22 12:28:27.345659 | orchestrator |         "weight": 0,
2025-06-22 12:28:27.345670 | orchestrator |         "crush_location": "{}"
2025-06-22 12:28:27.345681 | orchestrator |       },
2025-06-22 12:28:27.345691 | orchestrator |       {
2025-06-22 12:28:27.345702 | orchestrator |         "rank": 2,
2025-06-22 12:28:27.345712 | orchestrator |         "name": "testbed-node-2",
2025-06-22 12:28:27.345723 | orchestrator |         "public_addrs": {
2025-06-22 12:28:27.345733 | orchestrator |           "addrvec": [
2025-06-22 12:28:27.345744 | orchestrator |             {
2025-06-22 12:28:27.345755 | orchestrator |               "type": "v2",
2025-06-22 12:28:27.345765 | orchestrator |               "addr": "192.168.16.12:3300",
2025-06-22 12:28:27.345776 | orchestrator |               "nonce": 0
2025-06-22 12:28:27.345786 | orchestrator |             },
2025-06-22 12:28:27.345801 | orchestrator |             {
2025-06-22 12:28:27.345821 | orchestrator |               "type": "v1",
2025-06-22 12:28:27.345841 | orchestrator |               "addr": "192.168.16.12:6789",
2025-06-22 12:28:27.345857 | orchestrator |               "nonce": 0
2025-06-22 12:28:27.345867 | orchestrator |             }
2025-06-22 12:28:27.345878 | orchestrator |           ]
2025-06-22 12:28:27.345889 | orchestrator |         },
2025-06-22 12:28:27.345900 | orchestrator |         "addr": "192.168.16.12:6789/0",
2025-06-22 12:28:27.345910 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2025-06-22 12:28:27.345921 | orchestrator |         "priority": 0,
2025-06-22 12:28:27.345931 | orchestrator |         "weight": 0,
2025-06-22 12:28:27.345942 | orchestrator |         "crush_location": "{}"
2025-06-22 12:28:27.345953 | orchestrator |       }
2025-06-22 12:28:27.345963 | orchestrator |     ]
2025-06-22 12:28:27.345974 | orchestrator |   }
2025-06-22 12:28:27.345984 | orchestrator | }
2025-06-22 12:28:27.346007 | orchestrator |
2025-06-22 12:28:27.346080 | orchestrator | # Ceph free space status
2025-06-22 12:28:27.346094 | orchestrator |
2025-06-22 12:28:27.346105 | orchestrator | + echo
2025-06-22 12:28:27.346116 | orchestrator | + echo '# Ceph free space status'
2025-06-22 12:28:27.346127 | orchestrator | + echo
2025-06-22 12:28:27.346138 | orchestrator | + ceph df
2025-06-22 12:28:27.942077 | orchestrator | --- RAW STORAGE ---
2025-06-22 12:28:27.942184 | orchestrator | CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
2025-06-22 12:28:27.942213 | orchestrator | hdd      120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2025-06-22 12:28:27.942225 | orchestrator | TOTAL    120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2025-06-22 12:28:27.942241 | orchestrator |
2025-06-22 12:28:27.942261 | orchestrator | --- POOLS ---
2025-06-22 12:28:27.942300 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2025-06-22 12:28:27.942320 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2025-06-22 12:28:27.942338 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2025-06-22 12:28:27.942355 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2025-06-22 12:28:27.942373 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2025-06-22 12:28:27.942390 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2025-06-22 12:28:27.942437 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2025-06-22 12:28:27.942458 | orchestrator | default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
2025-06-22 12:28:27.942470 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2025-06-22 12:28:27.942481 | orchestrator | .rgw.root                   9   32  3.9 KiB        8   64 KiB      0     53 GiB
2025-06-22 12:28:27.942492 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2025-06-22 12:28:27.942503 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2025-06-22 12:28:27.942514 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.96     35 GiB
2025-06-22 12:28:27.942524 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2025-06-22 12:28:27.942535 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2025-06-22 12:28:27.989727 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-22 12:28:28.046071 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-22 12:28:28.046176 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-06-22 12:28:28.046193 | orchestrator | + osism apply facts
2025-06-22 12:28:29.759424 | orchestrator | Registering Redlock._acquired_script
2025-06-22 12:28:29.759532 | orchestrator | Registering Redlock._extend_script
2025-06-22 12:28:29.759549 | orchestrator | Registering Redlock._release_script
2025-06-22 12:28:29.817160 | orchestrator | 2025-06-22 12:28:29 | INFO  | Task 3a3bf340-c31f-472d-8433-fb43488840e8 (facts) was prepared for execution.
2025-06-22 12:28:29.817249 | orchestrator | 2025-06-22 12:28:29 | INFO  | It takes a moment until task 3a3bf340-c31f-472d-8433-fb43488840e8 (facts) has been started and output is visible here.
2025-06-22 12:28:34.000976 | orchestrator |
2025-06-22 12:28:34.002719 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-22 12:28:34.003670 | orchestrator |
2025-06-22 12:28:34.005936 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-22 12:28:34.006923 | orchestrator | Sunday 22 June 2025  12:28:33 +0000 (0:00:00.263)       0:00:00.263 ***********
2025-06-22 12:28:35.496183 | orchestrator | ok: [testbed-manager]
2025-06-22 12:28:35.498849 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:28:35.505842 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:28:35.505903 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:28:35.505916 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:28:35.505927 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:28:35.505937 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:28:35.505949 | orchestrator |
2025-06-22 12:28:35.505962 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-22 12:28:35.505975 | orchestrator | Sunday 22 June 2025  12:28:35 +0000 (0:00:01.491)       0:00:01.755 ***********
2025-06-22 12:28:35.676937 | orchestrator | skipping: [testbed-manager]
2025-06-22 12:28:35.765009 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:28:35.846558 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:28:35.925490 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:28:36.023101 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:28:36.768583 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:28:36.770005 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:28:36.770754 | orchestrator |
2025-06-22 12:28:36.772074 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-22 12:28:36.772918 | orchestrator |
2025-06-22 12:28:36.774199 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-22 12:28:36.774981 | orchestrator | Sunday 22 June 2025  12:28:36 +0000 (0:00:01.277)       0:00:03.032 ***********
2025-06-22 12:28:41.887566 | orchestrator | ok: [testbed-node-1]
2025-06-22 12:28:41.889083 | orchestrator | ok: [testbed-node-0]
2025-06-22 12:28:41.890406 | orchestrator | ok: [testbed-node-2]
2025-06-22 12:28:41.891626 | orchestrator | ok: [testbed-manager]
2025-06-22 12:28:41.892685 | orchestrator | ok: [testbed-node-3]
2025-06-22 12:28:41.894282 | orchestrator | ok: [testbed-node-4]
2025-06-22 12:28:41.895537 | orchestrator | ok: [testbed-node-5]
2025-06-22 12:28:41.896565 | orchestrator |
2025-06-22 12:28:41.897572 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-22 12:28:41.898274 | orchestrator |
2025-06-22 12:28:41.899208 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-22 12:28:41.899395 | orchestrator | Sunday 22 June 2025  12:28:41 +0000 (0:00:05.120)       0:00:08.153 ***********
2025-06-22 12:28:42.057294 | orchestrator | skipping: [testbed-manager]
2025-06-22 12:28:42.135845 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:28:42.216580 | orchestrator | skipping: [testbed-node-1]
2025-06-22 12:28:42.295903 | orchestrator | skipping: [testbed-node-2]
2025-06-22 12:28:42.380564 | orchestrator | skipping: [testbed-node-3]
2025-06-22 12:28:42.417291 | orchestrator | skipping: [testbed-node-4]
2025-06-22 12:28:42.417636 | orchestrator | skipping: [testbed-node-5]
2025-06-22 12:28:42.418240 | orchestrator |
2025-06-22 12:28:42.418887 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:28:42.419835 | orchestrator | 2025-06-22 12:28:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 12:28:42.420378 | orchestrator | 2025-06-22 12:28:42 | INFO  | Please wait and do not abort execution.
2025-06-22 12:28:42.420620 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.420978 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.421954 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.422797 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.423359 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.423886 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.424409 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:28:42.425129 | orchestrator |
2025-06-22 12:28:42.425575 | orchestrator |
2025-06-22 12:28:42.426447 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:28:42.426763 | orchestrator | Sunday 22 June 2025  12:28:42 +0000 (0:00:00.529)       0:00:08.682 ***********
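The `ceph quorum_status` JSON printed earlier in this log is well suited to a mechanical health check: every monitor listed in `monmap.mons` should also appear in `quorum_names`. A minimal sketch, using only the fields shown above; the helper name `mons_missing_from_quorum` and the trimmed sample are illustrative:

```python
import json

# Sketch: verify that every monitor in the monmap is currently in quorum.
# QUORUM_STATUS is a trimmed fragment of the `ceph quorum_status` output
# shown earlier in this log.
QUORUM_STATUS = json.loads("""
{
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {
    "mons": [
      {"rank": 0, "name": "testbed-node-0"},
      {"rank": 1, "name": "testbed-node-1"},
      {"rank": 2, "name": "testbed-node-2"}
    ]
  }
}
""")

def mons_missing_from_quorum(status: dict) -> set[str]:
    """Monitors present in the monmap but absent from the current quorum."""
    known = {mon["name"] for mon in status["monmap"]["mons"]}
    return known - set(status["quorum_names"])

print(mons_missing_from_quorum(QUORUM_STATUS))  # -> set()
```

This mirrors what the `osism validate ceph-mons` quorum test below checks against the monmap it fetches from a mon container.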
2025-06-22 12:28:42.427225 | orchestrator | =============================================================================== 2025-06-22 12:28:42.427702 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s 2025-06-22 12:28:42.428196 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s 2025-06-22 12:28:42.428528 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2025-06-22 12:28:42.428981 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-22 12:28:43.083404 | orchestrator | + osism validate ceph-mons 2025-06-22 12:28:44.760524 | orchestrator | Registering Redlock._acquired_script 2025-06-22 12:28:44.760693 | orchestrator | Registering Redlock._extend_script 2025-06-22 12:28:44.760717 | orchestrator | Registering Redlock._release_script 2025-06-22 12:29:04.646083 | orchestrator | 2025-06-22 12:29:04.646201 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-22 12:29:04.646220 | orchestrator | 2025-06-22 12:29:04.646233 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 12:29:04.646245 | orchestrator | Sunday 22 June 2025 12:28:48 +0000 (0:00:00.423) 0:00:00.423 *********** 2025-06-22 12:29:04.646278 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:04.646290 | orchestrator | 2025-06-22 12:29:04.646301 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 12:29:04.646312 | orchestrator | Sunday 22 June 2025 12:28:49 +0000 (0:00:00.649) 0:00:01.073 *********** 2025-06-22 12:29:04.646323 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:04.646334 | orchestrator | 2025-06-22 12:29:04.646345 | orchestrator | TASK [Define report vars] 
****************************************************** 2025-06-22 12:29:04.646356 | orchestrator | Sunday 22 June 2025 12:28:50 +0000 (0:00:00.828) 0:00:01.902 *********** 2025-06-22 12:29:04.646368 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.646380 | orchestrator | 2025-06-22 12:29:04.646391 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 12:29:04.646402 | orchestrator | Sunday 22 June 2025 12:28:50 +0000 (0:00:00.243) 0:00:02.145 *********** 2025-06-22 12:29:04.646413 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.646424 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:04.646435 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:04.646446 | orchestrator | 2025-06-22 12:29:04.646457 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 12:29:04.646468 | orchestrator | Sunday 22 June 2025 12:28:51 +0000 (0:00:00.305) 0:00:02.451 *********** 2025-06-22 12:29:04.646478 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:04.646489 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:04.646500 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.646511 | orchestrator | 2025-06-22 12:29:04.646521 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 12:29:04.646532 | orchestrator | Sunday 22 June 2025 12:28:52 +0000 (0:00:00.997) 0:00:03.448 *********** 2025-06-22 12:29:04.646543 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.646554 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:29:04.646564 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:29:04.646575 | orchestrator | 2025-06-22 12:29:04.646657 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 12:29:04.646673 | orchestrator | Sunday 22 June 2025 12:28:52 +0000 (0:00:00.316) 0:00:03.764 *********** 2025-06-22 
12:29:04.646684 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.646695 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:04.646706 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:04.646717 | orchestrator | 2025-06-22 12:29:04.646727 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:29:04.646738 | orchestrator | Sunday 22 June 2025 12:28:52 +0000 (0:00:00.546) 0:00:04.310 *********** 2025-06-22 12:29:04.646749 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.646760 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:04.646771 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:04.646781 | orchestrator | 2025-06-22 12:29:04.646792 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-22 12:29:04.646803 | orchestrator | Sunday 22 June 2025 12:28:53 +0000 (0:00:00.326) 0:00:04.637 *********** 2025-06-22 12:29:04.646814 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.646825 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:29:04.646836 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:29:04.646846 | orchestrator | 2025-06-22 12:29:04.646857 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-22 12:29:04.646868 | orchestrator | Sunday 22 June 2025 12:28:53 +0000 (0:00:00.302) 0:00:04.940 *********** 2025-06-22 12:29:04.646879 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.646889 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:04.646900 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:04.646911 | orchestrator | 2025-06-22 12:29:04.646922 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 12:29:04.646932 | orchestrator | Sunday 22 June 2025 12:28:53 +0000 (0:00:00.299) 0:00:05.239 *********** 2025-06-22 12:29:04.646943 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 12:29:04.646963 | orchestrator | 2025-06-22 12:29:04.646974 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 12:29:04.646985 | orchestrator | Sunday 22 June 2025 12:28:54 +0000 (0:00:00.726) 0:00:05.966 *********** 2025-06-22 12:29:04.646996 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647007 | orchestrator | 2025-06-22 12:29:04.647018 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 12:29:04.647028 | orchestrator | Sunday 22 June 2025 12:28:54 +0000 (0:00:00.277) 0:00:06.244 *********** 2025-06-22 12:29:04.647039 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647050 | orchestrator | 2025-06-22 12:29:04.647061 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:04.647072 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.251) 0:00:06.495 *********** 2025-06-22 12:29:04.647083 | orchestrator | 2025-06-22 12:29:04.647094 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:04.647120 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.073) 0:00:06.568 *********** 2025-06-22 12:29:04.647132 | orchestrator | 2025-06-22 12:29:04.647142 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:04.647153 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.072) 0:00:06.641 *********** 2025-06-22 12:29:04.647164 | orchestrator | 2025-06-22 12:29:04.647175 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 12:29:04.647186 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.078) 0:00:06.719 *********** 2025-06-22 12:29:04.647196 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647207 | orchestrator | 
2025-06-22 12:29:04.647218 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 12:29:04.647229 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.261) 0:00:06.981 *********** 2025-06-22 12:29:04.647240 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647251 | orchestrator | 2025-06-22 12:29:04.647281 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-22 12:29:04.647293 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.242) 0:00:07.224 *********** 2025-06-22 12:29:04.647304 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647314 | orchestrator | 2025-06-22 12:29:04.647325 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-22 12:29:04.647336 | orchestrator | Sunday 22 June 2025 12:28:55 +0000 (0:00:00.114) 0:00:07.339 *********** 2025-06-22 12:29:04.647347 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:29:04.647357 | orchestrator | 2025-06-22 12:29:04.647368 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-22 12:29:04.647379 | orchestrator | Sunday 22 June 2025 12:28:57 +0000 (0:00:01.595) 0:00:08.934 *********** 2025-06-22 12:29:04.647389 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647400 | orchestrator | 2025-06-22 12:29:04.647411 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-22 12:29:04.647421 | orchestrator | Sunday 22 June 2025 12:28:57 +0000 (0:00:00.333) 0:00:09.268 *********** 2025-06-22 12:29:04.647432 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647443 | orchestrator | 2025-06-22 12:29:04.647453 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-22 12:29:04.647469 | orchestrator | Sunday 22 June 2025 12:28:58 +0000 (0:00:00.372) 
0:00:09.640 *********** 2025-06-22 12:29:04.647480 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647491 | orchestrator | 2025-06-22 12:29:04.647502 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-22 12:29:04.647512 | orchestrator | Sunday 22 June 2025 12:28:58 +0000 (0:00:00.317) 0:00:09.959 *********** 2025-06-22 12:29:04.647523 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647534 | orchestrator | 2025-06-22 12:29:04.647544 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-22 12:29:04.647555 | orchestrator | Sunday 22 June 2025 12:28:58 +0000 (0:00:00.344) 0:00:10.303 *********** 2025-06-22 12:29:04.647671 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647693 | orchestrator | 2025-06-22 12:29:04.647705 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-22 12:29:04.647716 | orchestrator | Sunday 22 June 2025 12:28:58 +0000 (0:00:00.128) 0:00:10.431 *********** 2025-06-22 12:29:04.647726 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647738 | orchestrator | 2025-06-22 12:29:04.647748 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-22 12:29:04.647759 | orchestrator | Sunday 22 June 2025 12:28:59 +0000 (0:00:00.131) 0:00:10.563 *********** 2025-06-22 12:29:04.647770 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647780 | orchestrator | 2025-06-22 12:29:04.647791 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-22 12:29:04.647802 | orchestrator | Sunday 22 June 2025 12:28:59 +0000 (0:00:00.127) 0:00:10.690 *********** 2025-06-22 12:29:04.647813 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:29:04.647823 | orchestrator | 2025-06-22 12:29:04.647834 | orchestrator | TASK [Set health test data] 
**************************************************** 2025-06-22 12:29:04.647845 | orchestrator | Sunday 22 June 2025 12:29:00 +0000 (0:00:01.472) 0:00:12.163 *********** 2025-06-22 12:29:04.647856 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647866 | orchestrator | 2025-06-22 12:29:04.647877 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-22 12:29:04.647888 | orchestrator | Sunday 22 June 2025 12:29:01 +0000 (0:00:00.296) 0:00:12.459 *********** 2025-06-22 12:29:04.647899 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647910 | orchestrator | 2025-06-22 12:29:04.647921 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-22 12:29:04.647932 | orchestrator | Sunday 22 June 2025 12:29:01 +0000 (0:00:00.130) 0:00:12.590 *********** 2025-06-22 12:29:04.647943 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:04.647953 | orchestrator | 2025-06-22 12:29:04.647964 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-22 12:29:04.647975 | orchestrator | Sunday 22 June 2025 12:29:01 +0000 (0:00:00.153) 0:00:12.744 *********** 2025-06-22 12:29:04.647986 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.647997 | orchestrator | 2025-06-22 12:29:04.648008 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-22 12:29:04.648019 | orchestrator | Sunday 22 June 2025 12:29:01 +0000 (0:00:00.145) 0:00:12.889 *********** 2025-06-22 12:29:04.648030 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:04.648041 | orchestrator | 2025-06-22 12:29:04.648052 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 12:29:04.648063 | orchestrator | Sunday 22 June 2025 12:29:01 +0000 (0:00:00.325) 0:00:13.215 *********** 2025-06-22 12:29:04.648074 | orchestrator | ok: 
[testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:04.648086 | orchestrator |
2025-06-22 12:29:04.648097 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-22 12:29:04.648108 | orchestrator | Sunday 22 June 2025 12:29:02 +0000 (0:00:00.274) 0:00:13.490 ***********
2025-06-22 12:29:04.648119 | orchestrator | skipping: [testbed-node-0]
2025-06-22 12:29:04.648130 | orchestrator |
2025-06-22 12:29:04.648141 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-22 12:29:04.648152 | orchestrator | Sunday 22 June 2025 12:29:02 +0000 (0:00:00.252) 0:00:13.742 ***********
2025-06-22 12:29:04.648163 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:04.648174 | orchestrator |
2025-06-22 12:29:04.648185 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-22 12:29:04.648197 | orchestrator | Sunday 22 June 2025 12:29:03 +0000 (0:00:01.601) 0:00:15.344 ***********
2025-06-22 12:29:04.648208 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:04.648219 | orchestrator |
2025-06-22 12:29:04.648230 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-22 12:29:04.648249 | orchestrator | Sunday 22 June 2025 12:29:04 +0000 (0:00:00.264) 0:00:15.609 ***********
2025-06-22 12:29:04.648260 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:04.648271 | orchestrator |
2025-06-22 12:29:04.648290 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-22 12:29:07.213098 | orchestrator | Sunday 22 June 2025 12:29:04 +0000 (0:00:00.068) 0:00:15.856 ***********
2025-06-22 12:29:07.213205 | orchestrator |
2025-06-22 12:29:07.213223 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-22 12:29:07.213236 | orchestrator | Sunday 22 June 2025 12:29:04 +0000 (0:00:00.068) 0:00:15.925 ***********
2025-06-22 12:29:07.213247 | orchestrator |
2025-06-22 12:29:07.213258 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-22 12:29:07.213269 | orchestrator | Sunday 22 June 2025 12:29:04 +0000 (0:00:00.068) 0:00:15.993 ***********
2025-06-22 12:29:07.213279 | orchestrator |
2025-06-22 12:29:07.213290 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-22 12:29:07.213301 | orchestrator | Sunday 22 June 2025 12:29:04 +0000 (0:00:00.071) 0:00:16.065 ***********
2025-06-22 12:29:07.213312 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:07.213323 | orchestrator |
2025-06-22 12:29:07.213333 | orchestrator | TASK [Print report file information] *******************************************
2025-06-22 12:29:07.213344 | orchestrator | Sunday 22 June 2025 12:29:06 +0000 (0:00:01.598) 0:00:17.664 ***********
2025-06-22 12:29:07.213355 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-22 12:29:07.213385 | orchestrator |  "msg": [
2025-06-22 12:29:07.213398 | orchestrator |  "Validator run completed.",
2025-06-22 12:29:07.213409 | orchestrator |  "You can find the report file here:",
2025-06-22 12:29:07.213420 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-22T12:28:49+00:00-report.json",
2025-06-22 12:29:07.213432 | orchestrator |  "on the following host:",
2025-06-22 12:29:07.213443 | orchestrator |  "testbed-manager"
2025-06-22 12:29:07.213453 | orchestrator |  ]
2025-06-22 12:29:07.213464 | orchestrator | }
2025-06-22 12:29:07.213475 | orchestrator |
2025-06-22 12:29:07.213491 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:29:07.213503 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-22 12:29:07.213515 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:29:07.213526 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:29:07.213537 | orchestrator |
2025-06-22 12:29:07.213548 | orchestrator |
2025-06-22 12:29:07.213559 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:29:07.213569 | orchestrator | Sunday 22 June 2025 12:29:06 +0000 (0:00:00.672) 0:00:18.336 ***********
2025-06-22 12:29:07.213580 | orchestrator | ===============================================================================
2025-06-22 12:29:07.213590 | orchestrator | Aggregate test results step one ----------------------------------------- 1.60s
2025-06-22 12:29:07.213644 | orchestrator | Write report file ------------------------------------------------------- 1.60s
2025-06-22 12:29:07.213657 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.60s
2025-06-22 12:29:07.213668 | orchestrator | Gather status data ------------------------------------------------------ 1.47s
2025-06-22 12:29:07.213681 | orchestrator | Get container info ------------------------------------------------------ 1.00s
2025-06-22 12:29:07.213693 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-06-22 12:29:07.213705 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2025-06-22 12:29:07.213739 | orchestrator | Print report file information ------------------------------------------- 0.67s
2025-06-22 12:29:07.213750 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-06-22 12:29:07.213761 | orchestrator | Set test result to passed if container is existing ---------------------- 0.55s
2025-06-22 12:29:07.213772 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.37s
2025-06-22 12:29:07.213782 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2025-06-22 12:29:07.213793 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s
2025-06-22 12:29:07.213804 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-06-22 12:29:07.213814 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s
2025-06-22 12:29:07.213825 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2025-06-22 12:29:07.213836 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2025-06-22 12:29:07.213846 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2025-06-22 12:29:07.213857 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2025-06-22 12:29:07.213868 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s
2025-06-22 12:29:07.481876 | orchestrator | + osism validate ceph-mgrs
2025-06-22 12:29:09.184978 | orchestrator | Registering Redlock._acquired_script
2025-06-22 12:29:09.185086 | orchestrator | Registering Redlock._extend_script
2025-06-22 12:29:09.185102 | orchestrator | Registering Redlock._release_script
2025-06-22 12:29:28.380477 | orchestrator |
2025-06-22 12:29:28.380586 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-22 12:29:28.380644 | orchestrator |
2025-06-22 12:29:28.380658 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-22 12:29:28.380670 |
orchestrator | Sunday 22 June 2025 12:29:13 +0000 (0:00:00.425) 0:00:00.425 *********** 2025-06-22 12:29:28.380682 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:28.380693 | orchestrator | 2025-06-22 12:29:28.380704 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 12:29:28.380716 | orchestrator | Sunday 22 June 2025 12:29:14 +0000 (0:00:00.667) 0:00:01.093 *********** 2025-06-22 12:29:28.380727 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:28.380738 | orchestrator | 2025-06-22 12:29:28.380748 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 12:29:28.380759 | orchestrator | Sunday 22 June 2025 12:29:15 +0000 (0:00:00.891) 0:00:01.985 *********** 2025-06-22 12:29:28.380771 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.380783 | orchestrator | 2025-06-22 12:29:28.380794 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 12:29:28.380805 | orchestrator | Sunday 22 June 2025 12:29:15 +0000 (0:00:00.247) 0:00:02.233 *********** 2025-06-22 12:29:28.380815 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.380826 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:28.380837 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:28.380848 | orchestrator | 2025-06-22 12:29:28.380858 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 12:29:28.380869 | orchestrator | Sunday 22 June 2025 12:29:15 +0000 (0:00:00.299) 0:00:02.532 *********** 2025-06-22 12:29:28.380880 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:28.380891 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:28.380901 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.380912 | orchestrator | 2025-06-22 12:29:28.380923 | orchestrator | TASK [Set test result to 
failed if container is missing] *********************** 2025-06-22 12:29:28.380934 | orchestrator | Sunday 22 June 2025 12:29:16 +0000 (0:00:00.947) 0:00:03.480 *********** 2025-06-22 12:29:28.380945 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.380956 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:29:28.380986 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:29:28.380997 | orchestrator | 2025-06-22 12:29:28.381007 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 12:29:28.381018 | orchestrator | Sunday 22 June 2025 12:29:16 +0000 (0:00:00.284) 0:00:03.764 *********** 2025-06-22 12:29:28.381039 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381050 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:28.381061 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:28.381072 | orchestrator | 2025-06-22 12:29:28.381083 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:29:28.381093 | orchestrator | Sunday 22 June 2025 12:29:17 +0000 (0:00:00.504) 0:00:04.268 *********** 2025-06-22 12:29:28.381104 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381115 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:28.381126 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:28.381136 | orchestrator | 2025-06-22 12:29:28.381147 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-22 12:29:28.381159 | orchestrator | Sunday 22 June 2025 12:29:17 +0000 (0:00:00.320) 0:00:04.589 *********** 2025-06-22 12:29:28.381170 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381181 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:29:28.381191 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:29:28.381202 | orchestrator | 2025-06-22 12:29:28.381213 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] 
************************ 2025-06-22 12:29:28.381224 | orchestrator | Sunday 22 June 2025 12:29:18 +0000 (0:00:00.297) 0:00:04.886 *********** 2025-06-22 12:29:28.381234 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381245 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:29:28.381255 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:29:28.381266 | orchestrator | 2025-06-22 12:29:28.381277 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 12:29:28.381288 | orchestrator | Sunday 22 June 2025 12:29:18 +0000 (0:00:00.286) 0:00:05.173 *********** 2025-06-22 12:29:28.381298 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381309 | orchestrator | 2025-06-22 12:29:28.381320 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 12:29:28.381331 | orchestrator | Sunday 22 June 2025 12:29:19 +0000 (0:00:00.670) 0:00:05.844 *********** 2025-06-22 12:29:28.381341 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381352 | orchestrator | 2025-06-22 12:29:28.381363 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 12:29:28.381373 | orchestrator | Sunday 22 June 2025 12:29:19 +0000 (0:00:00.256) 0:00:06.100 *********** 2025-06-22 12:29:28.381384 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381395 | orchestrator | 2025-06-22 12:29:28.381406 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:28.381417 | orchestrator | Sunday 22 June 2025 12:29:19 +0000 (0:00:00.241) 0:00:06.342 *********** 2025-06-22 12:29:28.381427 | orchestrator | 2025-06-22 12:29:28.381438 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:28.381449 | orchestrator | Sunday 22 June 2025 12:29:19 +0000 (0:00:00.068) 0:00:06.411 *********** 2025-06-22 
12:29:28.381459 | orchestrator | 2025-06-22 12:29:28.381470 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:28.381481 | orchestrator | Sunday 22 June 2025 12:29:19 +0000 (0:00:00.069) 0:00:06.480 *********** 2025-06-22 12:29:28.381491 | orchestrator | 2025-06-22 12:29:28.381502 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 12:29:28.381562 | orchestrator | Sunday 22 June 2025 12:29:19 +0000 (0:00:00.075) 0:00:06.556 *********** 2025-06-22 12:29:28.381573 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381584 | orchestrator | 2025-06-22 12:29:28.381594 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 12:29:28.381625 | orchestrator | Sunday 22 June 2025 12:29:20 +0000 (0:00:00.240) 0:00:06.797 *********** 2025-06-22 12:29:28.381636 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381654 | orchestrator | 2025-06-22 12:29:28.381682 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-22 12:29:28.381694 | orchestrator | Sunday 22 June 2025 12:29:20 +0000 (0:00:00.243) 0:00:07.040 *********** 2025-06-22 12:29:28.381704 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381715 | orchestrator | 2025-06-22 12:29:28.381726 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-22 12:29:28.381736 | orchestrator | Sunday 22 June 2025 12:29:20 +0000 (0:00:00.121) 0:00:07.162 *********** 2025-06-22 12:29:28.381747 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:29:28.381758 | orchestrator | 2025-06-22 12:29:28.381769 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-22 12:29:28.381779 | orchestrator | Sunday 22 June 2025 12:29:22 +0000 (0:00:02.047) 0:00:09.209 *********** 2025-06-22 
12:29:28.381790 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381800 | orchestrator | 2025-06-22 12:29:28.381811 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-22 12:29:28.381822 | orchestrator | Sunday 22 June 2025 12:29:22 +0000 (0:00:00.251) 0:00:09.461 *********** 2025-06-22 12:29:28.381832 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381843 | orchestrator | 2025-06-22 12:29:28.381854 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-22 12:29:28.381864 | orchestrator | Sunday 22 June 2025 12:29:23 +0000 (0:00:00.751) 0:00:10.212 *********** 2025-06-22 12:29:28.381875 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.381886 | orchestrator | 2025-06-22 12:29:28.381896 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-22 12:29:28.381907 | orchestrator | Sunday 22 June 2025 12:29:23 +0000 (0:00:00.144) 0:00:10.357 *********** 2025-06-22 12:29:28.381918 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:29:28.381928 | orchestrator | 2025-06-22 12:29:28.381943 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 12:29:28.381954 | orchestrator | Sunday 22 June 2025 12:29:23 +0000 (0:00:00.148) 0:00:10.505 *********** 2025-06-22 12:29:28.381965 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:28.381976 | orchestrator | 2025-06-22 12:29:28.381986 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 12:29:28.381997 | orchestrator | Sunday 22 June 2025 12:29:23 +0000 (0:00:00.238) 0:00:10.744 *********** 2025-06-22 12:29:28.382008 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:29:28.382093 | orchestrator | 2025-06-22 12:29:28.382108 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2025-06-22 12:29:28.382119 | orchestrator | Sunday 22 June 2025 12:29:24 +0000 (0:00:00.256) 0:00:11.001 *********** 2025-06-22 12:29:28.382130 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:28.382140 | orchestrator | 2025-06-22 12:29:28.382151 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 12:29:28.382162 | orchestrator | Sunday 22 June 2025 12:29:25 +0000 (0:00:01.242) 0:00:12.244 *********** 2025-06-22 12:29:28.382173 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:28.382183 | orchestrator | 2025-06-22 12:29:28.382194 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 12:29:28.382205 | orchestrator | Sunday 22 June 2025 12:29:25 +0000 (0:00:00.259) 0:00:12.504 *********** 2025-06-22 12:29:28.382215 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:28.382226 | orchestrator | 2025-06-22 12:29:28.382237 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:28.382247 | orchestrator | Sunday 22 June 2025 12:29:25 +0000 (0:00:00.243) 0:00:12.747 *********** 2025-06-22 12:29:28.382258 | orchestrator | 2025-06-22 12:29:28.382269 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:28.382279 | orchestrator | Sunday 22 June 2025 12:29:26 +0000 (0:00:00.066) 0:00:12.813 *********** 2025-06-22 12:29:28.382298 | orchestrator | 2025-06-22 12:29:28.382309 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:28.382319 | orchestrator | Sunday 22 June 2025 12:29:26 +0000 (0:00:00.067) 0:00:12.881 *********** 2025-06-22 12:29:28.382330 | orchestrator | 2025-06-22 12:29:28.382341 | orchestrator | RUNNING HANDLER [Write 
report file] ********************************************
2025-06-22 12:29:28.382352 | orchestrator | Sunday 22 June 2025 12:29:26 +0000 (0:00:00.069) 0:00:12.951 ***********
2025-06-22 12:29:28.382362 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:28.382373 | orchestrator |
2025-06-22 12:29:28.382383 | orchestrator | TASK [Print report file information] *******************************************
2025-06-22 12:29:28.382394 | orchestrator | Sunday 22 June 2025 12:29:27 +0000 (0:00:01.745) 0:00:14.696 ***********
2025-06-22 12:29:28.382405 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-22 12:29:28.382416 | orchestrator |  "msg": [
2025-06-22 12:29:28.382427 | orchestrator |  "Validator run completed.",
2025-06-22 12:29:28.382438 | orchestrator |  "You can find the report file here:",
2025-06-22 12:29:28.382448 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-22T12:29:14+00:00-report.json",
2025-06-22 12:29:28.382460 | orchestrator |  "on the following host:",
2025-06-22 12:29:28.382471 | orchestrator |  "testbed-manager"
2025-06-22 12:29:28.382482 | orchestrator |  ]
2025-06-22 12:29:28.382493 | orchestrator | }
2025-06-22 12:29:28.382504 | orchestrator |
2025-06-22 12:29:28.382515 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 12:29:28.382527 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-22 12:29:28.382539 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:29:28.382560 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-22 12:29:28.699030 | orchestrator |
2025-06-22 12:29:28.699124 | orchestrator |
2025-06-22 12:29:28.699139 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 12:29:28.699154 | orchestrator | Sunday 22 June 2025 12:29:28 +0000 (0:00:00.442) 0:00:15.139 ***********
2025-06-22 12:29:28.699165 | orchestrator | ===============================================================================
2025-06-22 12:29:28.699176 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.05s
2025-06-22 12:29:28.699186 | orchestrator | Write report file ------------------------------------------------------- 1.75s
2025-06-22 12:29:28.699197 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s
2025-06-22 12:29:28.699208 | orchestrator | Get container info ------------------------------------------------------ 0.95s
2025-06-22 12:29:28.699218 | orchestrator | Create report output directory ------------------------------------------ 0.89s
2025-06-22 12:29:28.699229 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.75s
2025-06-22 12:29:28.699240 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s
2025-06-22 12:29:28.699250 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s
2025-06-22 12:29:28.699261 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s
2025-06-22 12:29:28.699272 | orchestrator | Print report file information ------------------------------------------- 0.44s
2025-06-22 12:29:28.699282 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-22 12:29:28.699293 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-06-22 12:29:28.699303 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2025-06-22 12:29:28.699314 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s
2025-06-22 12:29:28.699351 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2025-06-22 12:29:28.699362 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-06-22 12:29:28.699373 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-06-22 12:29:28.699383 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2025-06-22 12:29:28.699395 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.25s
2025-06-22 12:29:28.699406 | orchestrator | Define report vars ------------------------------------------------------ 0.25s
2025-06-22 12:29:28.932464 | orchestrator | + osism validate ceph-osds
2025-06-22 12:29:30.668137 | orchestrator | Registering Redlock._acquired_script
2025-06-22 12:29:30.669044 | orchestrator | Registering Redlock._extend_script
2025-06-22 12:29:30.669073 | orchestrator | Registering Redlock._release_script
2025-06-22 12:29:39.563582 | orchestrator |
2025-06-22 12:29:39.563732 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-22 12:29:39.563748 | orchestrator |
2025-06-22 12:29:39.563761 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-22 12:29:39.563773 | orchestrator | Sunday 22 June 2025 12:29:35 +0000 (0:00:00.432) 0:00:00.432 ***********
2025-06-22 12:29:39.563785 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-22 12:29:39.563796 | orchestrator |
2025-06-22 12:29:39.563807 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-22 12:29:39.563818 | orchestrator | Sunday 22 June 2025 12:29:35 +0000 (0:00:00.624) 0:00:01.056 ***********
2025-06-22 12:29:39.563829 | orchestrator | ok:
[testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:39.563840 | orchestrator | 2025-06-22 12:29:39.563851 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 12:29:39.563862 | orchestrator | Sunday 22 June 2025 12:29:36 +0000 (0:00:00.414) 0:00:01.471 *********** 2025-06-22 12:29:39.563873 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:29:39.563884 | orchestrator | 2025-06-22 12:29:39.563895 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 12:29:39.563906 | orchestrator | Sunday 22 June 2025 12:29:37 +0000 (0:00:01.012) 0:00:02.483 *********** 2025-06-22 12:29:39.563918 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:39.563930 | orchestrator | 2025-06-22 12:29:39.563941 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 12:29:39.563952 | orchestrator | Sunday 22 June 2025 12:29:37 +0000 (0:00:00.125) 0:00:02.609 *********** 2025-06-22 12:29:39.563963 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:39.563975 | orchestrator | 2025-06-22 12:29:39.563986 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 12:29:39.563997 | orchestrator | Sunday 22 June 2025 12:29:37 +0000 (0:00:00.150) 0:00:02.759 *********** 2025-06-22 12:29:39.564008 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:39.564020 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:29:39.564031 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:29:39.564042 | orchestrator | 2025-06-22 12:29:39.564053 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 12:29:39.564064 | orchestrator | Sunday 22 June 2025 12:29:37 +0000 (0:00:00.309) 0:00:03.069 *********** 2025-06-22 12:29:39.564075 | orchestrator | ok: [testbed-node-3] 
2025-06-22 12:29:39.564086 | orchestrator | 2025-06-22 12:29:39.564098 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 12:29:39.564109 | orchestrator | Sunday 22 June 2025 12:29:37 +0000 (0:00:00.150) 0:00:03.220 *********** 2025-06-22 12:29:39.564120 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:39.564132 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:39.564144 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:39.564157 | orchestrator | 2025-06-22 12:29:39.564168 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-22 12:29:39.564201 | orchestrator | Sunday 22 June 2025 12:29:38 +0000 (0:00:00.310) 0:00:03.531 *********** 2025-06-22 12:29:39.564214 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:39.564226 | orchestrator | 2025-06-22 12:29:39.564238 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:29:39.564250 | orchestrator | Sunday 22 June 2025 12:29:38 +0000 (0:00:00.603) 0:00:04.134 *********** 2025-06-22 12:29:39.564262 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:39.564274 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:39.564285 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:39.564296 | orchestrator | 2025-06-22 12:29:39.564307 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-22 12:29:39.564318 | orchestrator | Sunday 22 June 2025 12:29:39 +0000 (0:00:00.517) 0:00:04.652 *********** 2025-06-22 12:29:39.564332 | orchestrator | skipping: [testbed-node-3] => (item={'id': '51a8004419fbc605de27e10b188c89bac70549f2e89b86850299f5a2a7a52d2a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 12:29:39.564363 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'0585df077df0f5dbb88341854ea0c0c87c31f1f606c94880c8cb7b0fd7b91b36', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 12:29:39.564379 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'def263a75022947bd181f2a01f6b5415d36946647a7b0833216f3da40d782075', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 12:29:39.564392 | orchestrator | skipping: [testbed-node-3] => (item={'id': '839fcacd6b052cbc7a5a387ea3edfb39cec3ad3f1a1f33f50b954598fb35df2f', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 12:29:39.564404 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd7f29601c3b8c5023b8e9a4a0decd7e03edc60ae79f67f2406a8e8aa31a98bf5', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 12:29:39.564433 | orchestrator | skipping: [testbed-node-3] => (item={'id': '521b3ed2ae499e3e8815e0f1edd41f63c3272883cadd529d68294cfdfab843c6', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 12:29:39.564447 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb9beae604d0dcc5bc8e6446b931440cd3e15a84bc820a13e13777de57f02210', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-22 12:29:39.564468 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8a818fbc3685abfe4927d9668a5b70141f9538f75c77055515f27faa59373306', 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-22 12:29:39.564480 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57fc1fab39f7a681b51ef74245007f7ebb549e9680c847eedc78b1a522e5121a', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-22 12:29:39.564491 | orchestrator | skipping: [testbed-node-3] => (item={'id': '509ed875a973ea9a7bea15fe1b0b129b1a28837715a408c45658d402eac373c2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 12:29:39.564503 | orchestrator | skipping: [testbed-node-3] => (item={'id': '781dd242dda289f7213abbb5b674edc6bc484abfcade53f6be165919875f7100', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 12:29:39.564524 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d72af40b08ba420f18c40580d09e92b8d5864d215eff2184eee05bf16ff205c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 12:29:39.564537 | orchestrator | ok: [testbed-node-3] => (item={'id': '02586c65f4a538565874e0ab042ca84a3356ea210ea383d480190e517b2016a2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 12:29:39.564549 | orchestrator | ok: [testbed-node-3] => (item={'id': '089af3efec0fa1d681701f4e75bdd229fbf9affdeb1871af075b1c9128ad1837', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 12:29:39.564560 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': 'b5a590d60164b80834f409324782f0207cba7e2b3102948f7df67f572f414ddc', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 12:29:39.564571 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b590b96c2a3e154f33a0279f9e59885e95d794e61bb33f5cd9e281cbefc9a326', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 12:29:39.564583 | orchestrator | skipping: [testbed-node-3] => (item={'id': '36f9c8f05fcf5c54a3cb6556ccda6334fdeeacbce765763d719cd029fefd4e3b', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 12:29:39.564616 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5a2aefb9582dd58ebb063e744a4b1cdacd692892ae5415310c4b67a2b8443e1d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 12:29:39.564629 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d2943266d299ff1051b4d54a2b3b8fa60c980ba303b71e4925e267f84083c70', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 12:29:39.564640 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ecb3860de3ee30faf4dfa3075694a127d6aac79c799d2b529a6148d8e75c30a', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 12:29:39.564657 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c77189091843716ad8f171672c45a126902ffa56f34e675302cf3195f412eafa', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 12:29:39.724280 | orchestrator | skipping: [testbed-node-4] => (item={'id': '21cceafc2056c2f8c594438ca3ab437feb6dc4d8ccf9e7dd95ee7ab4aef5c83f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 12:29:39.724389 | orchestrator | skipping: [testbed-node-4] => (item={'id': '545d428239928a87dd40166321529355b7ab43e229ab4313d1034964b6303c02', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 12:29:39.724411 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f8db50770507a56b69c7a0ea5ce50866d759034d6113b8d8ef1408fc0eef3158', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 12:29:39.724451 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f09b2cb3ef9c266c7ce0377fb1bf6af65a7dff4d761d7d30e8fe7d40898dea20', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 12:29:39.724467 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aaa4867fb485980c84b7cb466a833ac7f4d452a015414f41cc6deac2d6a3698b', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 12:29:39.724483 | orchestrator | skipping: [testbed-node-4] => (item={'id': '34fd59c88a28732e003dfac5a53e24e08bd7d809da9b1c820372dac2c065fa0a', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': 
'/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-22 12:29:39.724502 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1dad49e6106a191ef199a816162dbe1948db6ffba04c43c8b3b7005e5bdb3eaa', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-22 12:29:39.724516 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9cc47b3669064318a7f00aab0c2e71c69d0ffb59261a383484f01c30d2f6807e', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-22 12:29:39.724530 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcc17dfe6161efebdd8491bc462f1ee398285a8844911b1f069a96988fd07fd6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 12:29:39.724545 | orchestrator | skipping: [testbed-node-4] => (item={'id': '51c9b975b51879dfe41196ce3aa66433b61c96c85ed6c3c51c8fa5b01b7a431c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 12:29:39.724560 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6a9cc65f84dbb1f76cf3859a6c7b0ab0ee625a91cfa8096fba9efe348f6ba521', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 12:29:39.724591 | orchestrator | ok: [testbed-node-4] => (item={'id': 'f2c705bcee03a77749c3f623900c484324e605dbe5aede591f19906ef048e26a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 12:29:39.724689 | orchestrator | ok: [testbed-node-4] => (item={'id': 
'2df014c9a11cd6dc6860972b0d1c61d4c8900e06e3e8edc3ee6d26da9e82eae1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 12:29:39.724705 | orchestrator | skipping: [testbed-node-4] => (item={'id': '816ed3fcc541965228960b1bce9b6f27a164e1494dd41555fa4f9febe5923ded', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 12:29:39.724741 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b20278cf4589eddf85375bf4de3ea23a6d8476e3a390f99d17397b6a311eab6e', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 12:29:39.724756 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a62bacbc910a3abcbcdb2bf6b6c7e2ff0c2202304356c0a572d08b7a1c38790a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 12:29:39.724771 | orchestrator | skipping: [testbed-node-4] => (item={'id': '069efbb529e125184188545639d83ee1d2d664a9de740e56870b6f920a3c078b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 12:29:39.724799 | orchestrator | skipping: [testbed-node-4] => (item={'id': '05edf9f703b5196fb929fc740c5de711aa2db3e727ae424195e02aa09660ade6', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 12:29:39.724815 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f1c64f4f98b4fecfd4d5afaee2d2edf70ed251820e8f81afc7aabd1f55e72a35', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 
'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 12:29:39.724830 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'acc5f5bc8f5ae6b3e076c328cb151a4d5ee71ccb388892720904f80537cf25c1', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 12:29:39.724845 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b22ef592ee4d8e2cf79e0ab7e17cad5c50b0f0f69ab7b0bc8cf8a84dd482ec51', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 12:29:39.724860 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e5c7b3ae8806d10343e9e71b4aa1128765bcff3499df90f1e43e463c7cc656f3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 12:29:39.724875 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f910b7763e43a27f6c90bf5c48ffaef7215101ecd9018627be566efff7be173b', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 12:29:39.724890 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9aab356bf3c66b0dfdcf537bf5a66c2c69644e290d4c818654447f726c4a2eba', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-22 12:29:39.724905 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f68e0d0b78863733aa3ba99be05e07bfcc1784ec52ec4a3e0f7d50841de300b7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 12:29:39.724921 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': '48310b682811271d2129ac4a8e527e176ae135e7a5a982d272d12272c88fc233', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-22 12:29:39.724938 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e6730df7955c5827d1632889bc31326f1ec9cc807cc43c47ddafd7401e5fc02b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-22 12:29:39.724963 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9885f34341b91d4aa03f845b8877c27a912104a0699bde9950b5517f9afa7f69', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-06-22 12:29:39.724979 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4e7afbe5c982dc43e9020c11174650dd80112a12ad75abfb9af29d265916bebe', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 12:29:39.725009 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41dbd9a6c2e6a53bcaa2bab3bddda88c6366fd73937ca47116689e9687289de8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 12:29:48.039246 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f06ca8f33c917684f293b8fdaad88f282dd06d438a56330055345e649195687a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 12:29:48.039360 | orchestrator | ok: [testbed-node-5] => (item={'id': '3a8131c59e3b403c199208709f7fe18fc13800789c290174ff6a951abc6eef93', 
'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 12:29:48.039377 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b42231182d7f8b8d3553b0122fc64fc4a4bc63e26cdab38b075d7ffbe12eeb0d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 12:29:48.039390 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae1efdcccc5d8e1b0a5022a4124deb81aa3d32783fe3ec4ec0488567a62e1ebd', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 12:29:48.039402 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f7f5959934686fb7f2248c88dfdb6323b2da9b9760f1e599a261b1a277f68743', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 12:29:48.039416 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6491ed2973016b7c3e75c285395a856ccf786d00f19a1a962e8768b035b67e96', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 12:29:48.039427 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb16a664f2bcfe4120baa5834b33899ec6aed8701450f4cea3c46f463dbd05a1', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 12:29:48.039439 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd25b1370421815113203baed36b7f332d852d007e3004127b37be79c99701f23', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 12:29:48.039460 | orchestrator 
| skipping: [testbed-node-5] => (item={'id': 'a2ff7f5e3d5dfe21fe011f1e06c7a6292e8db446c25955f36106f657ccb96ee4', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 12:29:48.039481 | orchestrator | 2025-06-22 12:29:48.039502 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-22 12:29:48.039522 | orchestrator | Sunday 22 June 2025 12:29:39 +0000 (0:00:00.496) 0:00:05.148 *********** 2025-06-22 12:29:48.039541 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.039558 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:48.039577 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:48.039595 | orchestrator | 2025-06-22 12:29:48.039675 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-22 12:29:48.039695 | orchestrator | Sunday 22 June 2025 12:29:40 +0000 (0:00:00.307) 0:00:05.456 *********** 2025-06-22 12:29:48.039714 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.039733 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:29:48.039744 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:29:48.039755 | orchestrator | 2025-06-22 12:29:48.039784 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-22 12:29:48.039798 | orchestrator | Sunday 22 June 2025 12:29:40 +0000 (0:00:00.482) 0:00:05.939 *********** 2025-06-22 12:29:48.039810 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.039822 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:48.039836 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:48.039867 | orchestrator | 2025-06-22 12:29:48.039880 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:29:48.039893 | orchestrator | Sunday 22 June 2025 12:29:40 +0000 (0:00:00.319) 0:00:06.258 *********** 
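The validator tasks above ("Get list of ceph-osd containers on host", "Get count of ceph-osd containers on host", and the pass/fail result tasks) boil down to filtering a container listing by name prefix and comparing the count against the expected number of OSDs per node. A minimal sketch of that logic, assuming an already-fetched container list (the sample items and the expected count of two OSDs per node are illustrative, mirroring this testbed, not output copied from the job):

```python
# Hedged sketch (not the actual OSISM role code): reproduce the
# container-count check from a container listing like the one logged above.
containers = [
    {"name": "/ceph-osd-4", "state": "running"},
    {"name": "/ceph-osd-0", "state": "running"},
    {"name": "/fluentd", "state": "running"},      # non-OSD containers are skipped
]

# Keep only ceph-osd containers, as the "skipping"/"ok" loop results show.
osds = [c for c in containers if c["name"].startswith("/ceph-osd-")]

# Assumption: two OSDs per node, as deployed in this testbed.
expected_osds_per_host = 2

result = "passed" if len(osds) == expected_osds_per_host else "failed"
print(result)
```

This mirrors why the "failed when count of containers is wrong" task is skipped and the "passed if count matches" task reports ok for all three nodes.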
2025-06-22 12:29:48.039905 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.039917 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:48.039929 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:48.039941 | orchestrator | 2025-06-22 12:29:48.039954 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-22 12:29:48.039966 | orchestrator | Sunday 22 June 2025 12:29:41 +0000 (0:00:00.312) 0:00:06.570 *********** 2025-06-22 12:29:48.039979 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-22 12:29:48.039993 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-22 12:29:48.040005 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040018 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-22 12:29:48.040031 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-22 12:29:48.040062 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:29:48.040075 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-22 12:29:48.040088 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-22 12:29:48.040100 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:29:48.040113 | orchestrator | 2025-06-22 12:29:48.040126 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-22 12:29:48.040137 | orchestrator | Sunday 22 June 2025 12:29:41 +0000 (0:00:00.329) 0:00:06.900 *********** 2025-06-22 12:29:48.040148 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.040159 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:48.040170 | orchestrator | ok: 
[testbed-node-5] 2025-06-22 12:29:48.040180 | orchestrator | 2025-06-22 12:29:48.040191 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 12:29:48.040202 | orchestrator | Sunday 22 June 2025 12:29:41 +0000 (0:00:00.503) 0:00:07.403 *********** 2025-06-22 12:29:48.040212 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040223 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:29:48.040234 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:29:48.040244 | orchestrator | 2025-06-22 12:29:48.040255 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 12:29:48.040265 | orchestrator | Sunday 22 June 2025 12:29:42 +0000 (0:00:00.283) 0:00:07.687 *********** 2025-06-22 12:29:48.040276 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040287 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:29:48.040298 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:29:48.040308 | orchestrator | 2025-06-22 12:29:48.040319 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-22 12:29:48.040330 | orchestrator | Sunday 22 June 2025 12:29:42 +0000 (0:00:00.290) 0:00:07.978 *********** 2025-06-22 12:29:48.040341 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.040351 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:48.040362 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:48.040373 | orchestrator | 2025-06-22 12:29:48.040383 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 12:29:48.040394 | orchestrator | Sunday 22 June 2025 12:29:42 +0000 (0:00:00.293) 0:00:08.271 *********** 2025-06-22 12:29:48.040445 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040457 | orchestrator | 2025-06-22 12:29:48.040468 | orchestrator | TASK [Aggregate test results step two] 
***************************************** 2025-06-22 12:29:48.040478 | orchestrator | Sunday 22 June 2025 12:29:43 +0000 (0:00:00.753) 0:00:09.025 *********** 2025-06-22 12:29:48.040489 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040514 | orchestrator | 2025-06-22 12:29:48.040533 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 12:29:48.040552 | orchestrator | Sunday 22 June 2025 12:29:43 +0000 (0:00:00.251) 0:00:09.276 *********** 2025-06-22 12:29:48.040570 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040586 | orchestrator | 2025-06-22 12:29:48.040631 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:48.040651 | orchestrator | Sunday 22 June 2025 12:29:44 +0000 (0:00:00.263) 0:00:09.539 *********** 2025-06-22 12:29:48.040669 | orchestrator | 2025-06-22 12:29:48.040687 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:48.040705 | orchestrator | Sunday 22 June 2025 12:29:44 +0000 (0:00:00.068) 0:00:09.608 *********** 2025-06-22 12:29:48.040722 | orchestrator | 2025-06-22 12:29:48.040740 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:29:48.040758 | orchestrator | Sunday 22 June 2025 12:29:44 +0000 (0:00:00.068) 0:00:09.677 *********** 2025-06-22 12:29:48.040777 | orchestrator | 2025-06-22 12:29:48.040795 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 12:29:48.040815 | orchestrator | Sunday 22 June 2025 12:29:44 +0000 (0:00:00.068) 0:00:09.746 *********** 2025-06-22 12:29:48.040834 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040852 | orchestrator | 2025-06-22 12:29:48.040870 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-22 12:29:48.040890 | 
orchestrator | Sunday 22 June 2025 12:29:44 +0000 (0:00:00.246) 0:00:09.993 *********** 2025-06-22 12:29:48.040908 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:29:48.040926 | orchestrator | 2025-06-22 12:29:48.040946 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:29:48.040966 | orchestrator | Sunday 22 June 2025 12:29:44 +0000 (0:00:00.264) 0:00:10.257 *********** 2025-06-22 12:29:48.040996 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.041011 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:29:48.041022 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:29:48.041033 | orchestrator | 2025-06-22 12:29:48.041044 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-22 12:29:48.041055 | orchestrator | Sunday 22 June 2025 12:29:45 +0000 (0:00:00.293) 0:00:10.551 *********** 2025-06-22 12:29:48.041066 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.041077 | orchestrator | 2025-06-22 12:29:48.041088 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-22 12:29:48.041098 | orchestrator | Sunday 22 June 2025 12:29:45 +0000 (0:00:00.769) 0:00:11.320 *********** 2025-06-22 12:29:48.041109 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 12:29:48.041120 | orchestrator | 2025-06-22 12:29:48.041131 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-22 12:29:48.041141 | orchestrator | Sunday 22 June 2025 12:29:47 +0000 (0:00:01.585) 0:00:12.906 *********** 2025-06-22 12:29:48.041152 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.041163 | orchestrator | 2025-06-22 12:29:48.041173 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-22 12:29:48.041184 | orchestrator | Sunday 22 June 2025 12:29:47 +0000 (0:00:00.128) 
0:00:13.034 *********** 2025-06-22 12:29:48.041195 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:29:48.041206 | orchestrator | 2025-06-22 12:29:48.041217 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-22 12:29:48.041227 | orchestrator | Sunday 22 June 2025 12:29:47 +0000 (0:00:00.306) 0:00:13.341 *********** 2025-06-22 12:29:48.041250 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:30:00.763404 | orchestrator | 2025-06-22 12:30:00.763539 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-22 12:30:00.763558 | orchestrator | Sunday 22 June 2025 12:29:48 +0000 (0:00:00.122) 0:00:13.463 *********** 2025-06-22 12:30:00.763571 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.763583 | orchestrator | 2025-06-22 12:30:00.763678 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:30:00.763699 | orchestrator | Sunday 22 June 2025 12:29:48 +0000 (0:00:00.127) 0:00:13.591 *********** 2025-06-22 12:30:00.763718 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.763737 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.763755 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.763775 | orchestrator | 2025-06-22 12:30:00.763788 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-22 12:30:00.763799 | orchestrator | Sunday 22 June 2025 12:29:48 +0000 (0:00:00.304) 0:00:13.895 *********** 2025-06-22 12:30:00.763810 | orchestrator | changed: [testbed-node-3] 2025-06-22 12:30:00.763822 | orchestrator | changed: [testbed-node-4] 2025-06-22 12:30:00.763833 | orchestrator | changed: [testbed-node-5] 2025-06-22 12:30:00.763844 | orchestrator | 2025-06-22 12:30:00.763854 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-22 12:30:00.763865 | orchestrator | Sunday 22 
June 2025 12:29:51 +0000 (0:00:02.646) 0:00:16.542 *********** 2025-06-22 12:30:00.763876 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.763887 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.763898 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.763910 | orchestrator | 2025-06-22 12:30:00.763922 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-22 12:30:00.763935 | orchestrator | Sunday 22 June 2025 12:29:51 +0000 (0:00:00.317) 0:00:16.859 *********** 2025-06-22 12:30:00.763948 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.763960 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.763973 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.763985 | orchestrator | 2025-06-22 12:30:00.763996 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-22 12:30:00.764007 | orchestrator | Sunday 22 June 2025 12:29:51 +0000 (0:00:00.507) 0:00:17.367 *********** 2025-06-22 12:30:00.764017 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:30:00.764028 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:30:00.764039 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:30:00.764049 | orchestrator | 2025-06-22 12:30:00.764060 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-22 12:30:00.764071 | orchestrator | Sunday 22 June 2025 12:29:52 +0000 (0:00:00.304) 0:00:17.672 *********** 2025-06-22 12:30:00.764082 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.764092 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.764103 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.764114 | orchestrator | 2025-06-22 12:30:00.764124 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-22 12:30:00.764135 | orchestrator | Sunday 22 June 2025 12:29:52 +0000 (0:00:00.569) 0:00:18.242 
*********** 2025-06-22 12:30:00.764146 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:30:00.764157 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:30:00.764167 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:30:00.764178 | orchestrator | 2025-06-22 12:30:00.764188 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-22 12:30:00.764199 | orchestrator | Sunday 22 June 2025 12:29:53 +0000 (0:00:00.352) 0:00:18.594 *********** 2025-06-22 12:30:00.764210 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:30:00.764220 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:30:00.764231 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:30:00.764242 | orchestrator | 2025-06-22 12:30:00.764253 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 12:30:00.764264 | orchestrator | Sunday 22 June 2025 12:29:53 +0000 (0:00:00.298) 0:00:18.892 *********** 2025-06-22 12:30:00.764274 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.764285 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.764301 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.764319 | orchestrator | 2025-06-22 12:30:00.764338 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-22 12:30:00.764370 | orchestrator | Sunday 22 June 2025 12:29:53 +0000 (0:00:00.513) 0:00:19.406 *********** 2025-06-22 12:30:00.764390 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.764408 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.764426 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.764446 | orchestrator | 2025-06-22 12:30:00.764465 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-22 12:30:00.764483 | orchestrator | Sunday 22 June 2025 12:29:54 +0000 (0:00:00.775) 0:00:20.181 *********** 2025-06-22 12:30:00.764502 
| orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.764521 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.764539 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.764557 | orchestrator | 2025-06-22 12:30:00.764576 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-22 12:30:00.764597 | orchestrator | Sunday 22 June 2025 12:29:55 +0000 (0:00:00.311) 0:00:20.493 *********** 2025-06-22 12:30:00.764643 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:30:00.764662 | orchestrator | skipping: [testbed-node-4] 2025-06-22 12:30:00.764681 | orchestrator | skipping: [testbed-node-5] 2025-06-22 12:30:00.764699 | orchestrator | 2025-06-22 12:30:00.764718 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-22 12:30:00.764731 | orchestrator | Sunday 22 June 2025 12:29:55 +0000 (0:00:00.392) 0:00:20.885 *********** 2025-06-22 12:30:00.764743 | orchestrator | ok: [testbed-node-3] 2025-06-22 12:30:00.764753 | orchestrator | ok: [testbed-node-4] 2025-06-22 12:30:00.764764 | orchestrator | ok: [testbed-node-5] 2025-06-22 12:30:00.764775 | orchestrator | 2025-06-22 12:30:00.764785 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 12:30:00.764796 | orchestrator | Sunday 22 June 2025 12:29:56 +0000 (0:00:00.545) 0:00:21.431 *********** 2025-06-22 12:30:00.764807 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:30:00.764818 | orchestrator | 2025-06-22 12:30:00.764829 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 12:30:00.764840 | orchestrator | Sunday 22 June 2025 12:29:56 +0000 (0:00:00.261) 0:00:21.692 *********** 2025-06-22 12:30:00.764851 | orchestrator | skipping: [testbed-node-3] 2025-06-22 12:30:00.764861 | orchestrator | 2025-06-22 12:30:00.764893 | orchestrator | TASK [Aggregate test 
results step one] ***************************************** 2025-06-22 12:30:00.764904 | orchestrator | Sunday 22 June 2025 12:29:56 +0000 (0:00:00.258) 0:00:21.950 *********** 2025-06-22 12:30:00.764915 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:30:00.764925 | orchestrator | 2025-06-22 12:30:00.764936 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 12:30:00.764947 | orchestrator | Sunday 22 June 2025 12:29:58 +0000 (0:00:01.551) 0:00:23.501 *********** 2025-06-22 12:30:00.764958 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:30:00.764969 | orchestrator | 2025-06-22 12:30:00.764979 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 12:30:00.764990 | orchestrator | Sunday 22 June 2025 12:29:58 +0000 (0:00:00.268) 0:00:23.770 *********** 2025-06-22 12:30:00.765000 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:30:00.765011 | orchestrator | 2025-06-22 12:30:00.765022 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:30:00.765032 | orchestrator | Sunday 22 June 2025 12:29:58 +0000 (0:00:00.260) 0:00:24.031 *********** 2025-06-22 12:30:00.765043 | orchestrator | 2025-06-22 12:30:00.765054 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:30:00.765064 | orchestrator | Sunday 22 June 2025 12:29:58 +0000 (0:00:00.067) 0:00:24.098 *********** 2025-06-22 12:30:00.765075 | orchestrator | 2025-06-22 12:30:00.765086 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 12:30:00.765103 | orchestrator | Sunday 22 June 2025 12:29:58 +0000 (0:00:00.066) 0:00:24.165 *********** 2025-06-22 12:30:00.765121 | orchestrator | 2025-06-22 12:30:00.765139 | orchestrator | 
RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 12:30:00.765172 | orchestrator | Sunday 22 June 2025 12:29:58 +0000 (0:00:00.070) 0:00:24.236 *********** 2025-06-22 12:30:00.765185 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 12:30:00.765205 | orchestrator | 2025-06-22 12:30:00.765222 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 12:30:00.765240 | orchestrator | Sunday 22 June 2025 12:30:00 +0000 (0:00:01.287) 0:00:25.523 *********** 2025-06-22 12:30:00.765259 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-22 12:30:00.765279 | orchestrator |  "msg": [ 2025-06-22 12:30:00.765348 | orchestrator |  "Validator run completed.", 2025-06-22 12:30:00.765362 | orchestrator |  "You can find the report file here:", 2025-06-22 12:30:00.765373 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-22T12:29:35+00:00-report.json", 2025-06-22 12:30:00.765388 | orchestrator |  "on the following host:", 2025-06-22 12:30:00.765405 | orchestrator |  "testbed-manager" 2025-06-22 12:30:00.765423 | orchestrator |  ] 2025-06-22 12:30:00.765442 | orchestrator | } 2025-06-22 12:30:00.765461 | orchestrator | 2025-06-22 12:30:00.765479 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:30:00.765497 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-22 12:30:00.765516 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 12:30:00.765534 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 12:30:00.765552 | orchestrator | 2025-06-22 12:30:00.765570 | orchestrator | 2025-06-22 12:30:00.765589 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 12:30:00.765637 | orchestrator | Sunday 22 June 2025 12:30:00 +0000 (0:00:00.634) 0:00:26.157 *********** 2025-06-22 12:30:00.765658 | orchestrator | =============================================================================== 2025-06-22 12:30:00.765677 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.65s 2025-06-22 12:30:00.765697 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s 2025-06-22 12:30:00.765726 | orchestrator | Aggregate test results step one ----------------------------------------- 1.55s 2025-06-22 12:30:00.765746 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2025-06-22 12:30:00.765764 | orchestrator | Create report output directory ------------------------------------------ 1.01s 2025-06-22 12:30:00.765783 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.78s 2025-06-22 12:30:00.765802 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.77s 2025-06-22 12:30:00.765821 | orchestrator | Aggregate test results step one ----------------------------------------- 0.75s 2025-06-22 12:30:00.765840 | orchestrator | Print report file information ------------------------------------------- 0.63s 2025-06-22 12:30:00.765858 | orchestrator | Get timestamp for report file ------------------------------------------- 0.62s 2025-06-22 12:30:00.765878 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.60s 2025-06-22 12:30:00.765897 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.57s 2025-06-22 12:30:00.765916 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.55s 2025-06-22 12:30:00.765935 | orchestrator | Prepare test data 
------------------------------------------------------- 0.52s 2025-06-22 12:30:00.765956 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2025-06-22 12:30:00.765969 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2025-06-22 12:30:00.765992 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s 2025-06-22 12:30:01.110832 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2025-06-22 12:30:01.110938 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s 2025-06-22 12:30:01.110954 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.41s 2025-06-22 12:30:01.403557 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-22 12:30:01.409969 | orchestrator | + set -e 2025-06-22 12:30:01.410015 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 12:30:01.410098 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 12:30:01.410120 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 12:30:01.410132 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 12:30:01.410143 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 12:30:01.410154 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 12:30:01.410166 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 12:30:01.410177 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 12:30:01.410188 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 12:30:01.410199 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 12:30:01.410210 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 12:30:01.410221 | orchestrator | ++ export ARA=false 2025-06-22 12:30:01.410232 | orchestrator | ++ ARA=false 2025-06-22 12:30:01.410243 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 12:30:01.410254 | 
orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 12:30:01.410265 | orchestrator | ++ export TEMPEST=false 2025-06-22 12:30:01.410276 | orchestrator | ++ TEMPEST=false 2025-06-22 12:30:01.410286 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 12:30:01.410297 | orchestrator | ++ IS_ZUUL=true 2025-06-22 12:30:01.410307 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200 2025-06-22 12:30:01.410318 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.200 2025-06-22 12:30:01.410329 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 12:30:01.410340 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 12:30:01.410351 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 12:30:01.410361 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 12:30:01.410372 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 12:30:01.410383 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 12:30:01.410393 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 12:30:01.410404 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 12:30:01.410415 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-22 12:30:01.410426 | orchestrator | + source /etc/os-release 2025-06-22 12:30:01.410436 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-22 12:30:01.410447 | orchestrator | ++ NAME=Ubuntu 2025-06-22 12:30:01.410458 | orchestrator | ++ VERSION_ID=24.04 2025-06-22 12:30:01.410469 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-22 12:30:01.410479 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-22 12:30:01.410490 | orchestrator | ++ ID=ubuntu 2025-06-22 12:30:01.410500 | orchestrator | ++ ID_LIKE=debian 2025-06-22 12:30:01.410511 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-22 12:30:01.410522 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-22 12:30:01.410533 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-22 12:30:01.410544 | orchestrator | ++ 
PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-22 12:30:01.410555 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-22 12:30:01.410567 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-22 12:30:01.410579 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-22 12:30:01.410592 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-22 12:30:01.410655 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 12:30:01.446446 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 12:30:23.882538 | orchestrator | 2025-06-22 12:30:23.882704 | orchestrator | # Status of Elasticsearch 2025-06-22 12:30:23.882723 | orchestrator | 2025-06-22 12:30:23.882736 | orchestrator | + pushd /opt/configuration/contrib 2025-06-22 12:30:23.882749 | orchestrator | + echo 2025-06-22 12:30:23.882761 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-22 12:30:23.882772 | orchestrator | + echo 2025-06-22 12:30:23.882783 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-22 12:30:24.048452 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-22 12:30:24.048871 | orchestrator | 2025-06-22 12:30:24.048893 | orchestrator | # Status of MariaDB 2025-06-22 12:30:24.048905 | orchestrator | 2025-06-22 12:30:24.048917 | orchestrator | + echo 2025-06-22 12:30:24.048929 | orchestrator | + echo '# Status of MariaDB' 2025-06-22 12:30:24.048941 | orchestrator | + echo 2025-06-22 12:30:24.048952 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-22 12:30:24.048964 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-22 12:30:24.127411 | orchestrator | Reading package lists... 2025-06-22 12:30:24.489089 | orchestrator | Building dependency tree... 2025-06-22 12:30:24.489657 | orchestrator | Reading state information... 2025-06-22 12:30:24.853272 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-22 12:30:24.853375 | orchestrator | bc set to manually installed. 2025-06-22 12:30:24.853393 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-06-22 12:30:25.454721 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-22 12:30:25.455027 | orchestrator | 2025-06-22 12:30:25.455046 | orchestrator | # Status of Prometheus 2025-06-22 12:30:25.455059 | orchestrator | + echo 2025-06-22 12:30:25.455071 | orchestrator | + echo '# Status of Prometheus' 2025-06-22 12:30:25.455082 | orchestrator | + echo 2025-06-22 12:30:25.455093 | orchestrator | 2025-06-22 12:30:25.455105 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-22 12:30:25.516272 | orchestrator | Unauthorized 2025-06-22 12:30:25.519494 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-22 12:30:25.576934 | orchestrator | Unauthorized 2025-06-22 12:30:25.579926 | orchestrator | 2025-06-22 12:30:25.579974 | orchestrator | # Status of RabbitMQ 2025-06-22 12:30:25.579987 | orchestrator | 2025-06-22 12:30:25.579999 | orchestrator | + echo 2025-06-22 12:30:25.580010 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-22 12:30:25.580024 | orchestrator | + echo 2025-06-22 12:30:25.580044 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-22 12:30:26.023194 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-22 12:30:26.033908 | orchestrator | 2025-06-22 12:30:26.034010 | orchestrator | + echo 2025-06-22 12:30:26.034066 | orchestrator | # Status of Redis 2025-06-22 12:30:26.034075 | orchestrator | 2025-06-22 12:30:26.034083 | orchestrator | + echo '# Status of Redis' 2025-06-22 12:30:26.034091 | orchestrator | + echo 2025-06-22 12:30:26.034099 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-22 12:30:26.041563 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001695s;;;0.000000;10.000000 2025-06-22 12:30:26.041886 | orchestrator | + popd 2025-06-22 12:30:26.042218 | orchestrator | 2025-06-22 12:30:26.042312 | orchestrator | + echo 2025-06-22 12:30:26.042324 | orchestrator | # Create backup of MariaDB database 2025-06-22 12:30:26.042333 | orchestrator | 2025-06-22 12:30:26.042340 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-22 12:30:26.042348 | orchestrator | + echo 2025-06-22 12:30:26.042356 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-22 12:30:27.795808 | orchestrator | 2025-06-22 12:30:27 | INFO  | Task e11e46a5-85f7-4899-9b99-62060efd39e7 (mariadb_backup) was prepared for execution. 2025-06-22 12:30:27.795913 | orchestrator | 2025-06-22 12:30:27 | INFO  | It takes a moment until task e11e46a5-85f7-4899-9b99-62060efd39e7 (mariadb_backup) has been started and output is visible here. 2025-06-22 12:30:31.657194 | orchestrator | 2025-06-22 12:30:31.660063 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 12:30:31.660178 | orchestrator | 2025-06-22 12:30:31.660196 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 12:30:31.660204 | orchestrator | Sunday 22 June 2025 12:30:31 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-06-22 12:30:31.844438 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:30:31.970098 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:30:31.970707 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:30:31.972049 | orchestrator | 2025-06-22 12:30:31.974684 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 12:30:31.975705 | orchestrator | Sunday 22 June 2025 12:30:31 +0000 (0:00:00.315) 0:00:00.490 *********** 2025-06-22 12:30:32.525767 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-06-22 12:30:32.528087 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-22 12:30:32.528133 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 12:30:32.528156 | orchestrator | 2025-06-22 12:30:32.528848 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 12:30:32.530306 | orchestrator | 2025-06-22 12:30:32.530637 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 12:30:32.531832 | orchestrator | Sunday 22 June 2025 12:30:32 +0000 (0:00:00.557) 0:00:01.047 *********** 2025-06-22 12:30:32.917677 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 12:30:32.922155 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 12:30:32.922232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 12:30:32.922248 | orchestrator | 2025-06-22 12:30:32.922261 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 12:30:32.922274 | orchestrator | Sunday 22 June 2025 12:30:32 +0000 (0:00:00.389) 0:00:01.437 *********** 2025-06-22 12:30:33.431484 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 12:30:33.432389 | orchestrator | 2025-06-22 12:30:33.433474 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-22 12:30:33.434772 | orchestrator | Sunday 22 June 2025 12:30:33 +0000 (0:00:00.514) 0:00:01.951 *********** 2025-06-22 12:30:36.537674 | orchestrator | ok: [testbed-node-1] 2025-06-22 12:30:36.537811 | orchestrator | ok: [testbed-node-0] 2025-06-22 12:30:36.537827 | orchestrator | ok: [testbed-node-2] 2025-06-22 12:30:36.538667 | orchestrator | 2025-06-22 12:30:36.542972 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-06-22 12:30:36.543765 | orchestrator | Sunday 22 June 2025 12:30:36 +0000 (0:00:03.102) 0:00:05.054 *********** 2025-06-22 12:30:54.374699 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 12:30:54.374811 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-22 12:30:54.374829 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 12:30:54.374842 | orchestrator | mariadb_bootstrap_restart 2025-06-22 12:30:54.443990 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:30:54.445488 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:30:54.446094 | orchestrator | changed: [testbed-node-0] 2025-06-22 12:30:54.449073 | orchestrator | 2025-06-22 12:30:54.449922 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 12:30:54.450877 | orchestrator | skipping: no hosts matched 2025-06-22 12:30:54.451899 | orchestrator | 2025-06-22 12:30:54.452409 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 12:30:54.453676 | orchestrator | skipping: no hosts matched 2025-06-22 12:30:54.454109 | orchestrator | 2025-06-22 12:30:54.455452 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 12:30:54.455933 | orchestrator | skipping: no hosts matched 2025-06-22 12:30:54.459430 | orchestrator | 2025-06-22 12:30:54.460142 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 12:30:54.460695 | orchestrator | 2025-06-22 12:30:54.461656 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 12:30:54.462401 | orchestrator | Sunday 22 June 2025 12:30:54 +0000 (0:00:17.910) 0:00:22.964 *********** 2025-06-22 12:30:54.625277 | orchestrator | 
skipping: [testbed-node-0] 2025-06-22 12:30:54.748628 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:30:54.748971 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:30:54.751154 | orchestrator | 2025-06-22 12:30:54.751747 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 12:30:54.753372 | orchestrator | Sunday 22 June 2025 12:30:54 +0000 (0:00:00.303) 0:00:23.268 *********** 2025-06-22 12:30:55.120367 | orchestrator | skipping: [testbed-node-0] 2025-06-22 12:30:55.164416 | orchestrator | skipping: [testbed-node-1] 2025-06-22 12:30:55.165292 | orchestrator | skipping: [testbed-node-2] 2025-06-22 12:30:55.165769 | orchestrator | 2025-06-22 12:30:55.167052 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:30:55.167401 | orchestrator | 2025-06-22 12:30:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 12:30:55.167694 | orchestrator | 2025-06-22 12:30:55 | INFO  | Please wait and do not abort execution. 
2025-06-22 12:30:55.170459 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 12:30:55.174155 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 12:30:55.174507 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 12:30:55.174997 | orchestrator | 2025-06-22 12:30:55.175936 | orchestrator | 2025-06-22 12:30:55.175970 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:30:55.176465 | orchestrator | Sunday 22 June 2025 12:30:55 +0000 (0:00:00.418) 0:00:23.686 *********** 2025-06-22 12:30:55.177075 | orchestrator | =============================================================================== 2025-06-22 12:30:55.180749 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.91s 2025-06-22 12:30:55.180785 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.10s 2025-06-22 12:30:55.180796 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-06-22 12:30:55.180807 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-06-22 12:30:55.180818 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s 2025-06-22 12:30:55.180829 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-06-22 12:30:55.180839 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-06-22 12:30:55.180850 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-06-22 12:30:55.824579 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-22 12:30:55.831727 | orchestrator | + set -e 
2025-06-22 12:30:55.831768 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 12:30:55.831783 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 12:30:55.831796 | orchestrator | ++ INTERACTIVE=false 2025-06-22 12:30:55.831807 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 12:30:55.831818 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 12:30:55.831829 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 12:30:55.832504 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 12:30:55.837421 | orchestrator | 2025-06-22 12:30:55.837454 | orchestrator | # OpenStack endpoints 2025-06-22 12:30:55.837466 | orchestrator | 2025-06-22 12:30:55.837478 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 12:30:55.837490 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 12:30:55.837502 | orchestrator | + export OS_CLOUD=admin 2025-06-22 12:30:55.837512 | orchestrator | + OS_CLOUD=admin 2025-06-22 12:30:55.837524 | orchestrator | + echo 2025-06-22 12:30:55.837535 | orchestrator | + echo '# OpenStack endpoints' 2025-06-22 12:30:55.837546 | orchestrator | + echo 2025-06-22 12:30:55.837558 | orchestrator | + openstack endpoint list 2025-06-22 12:30:59.257506 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 12:30:59.257574 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-22 12:30:59.257657 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 12:30:59.257679 | orchestrator | | 199ca83e928441fca32b58eec263832e | RegionOne | glance | image | True | public | 
https://api.testbed.osism.xyz:9292 | 2025-06-22 12:30:59.257698 | orchestrator | | 29d28a3b6ff8429ea55ddc02f4bb8e65 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-22 12:30:59.257709 | orchestrator | | 309213f3cc954388b7769ec73463f16d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-22 12:30:59.257720 | orchestrator | | 3ef4428334654e8996ac5fd3bdad585a | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-22 12:30:59.257731 | orchestrator | | 455d951371974187b403474ab43e7f32 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 12:30:59.257742 | orchestrator | | 45797949d5c5441e9142c2183528be67 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-22 12:30:59.257753 | orchestrator | | 48cc8df6a86948d7987fa2b79fa3faaa | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-22 12:30:59.257764 | orchestrator | | 58283ab065cb4d4d85ebc3574b0b5b59 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-22 12:30:59.257774 | orchestrator | | 661219c497094eecb15c93fe9d38e98c | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-22 12:30:59.257785 | orchestrator | | 7357958cceaa4d0398aad15c0f616750 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 12:30:59.257796 | orchestrator | | 73bccf55982d4062bdb6a2360a646cf1 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-22 12:30:59.257807 | orchestrator | | 77e3df37293c4da9ad14818a4dda4eb1 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-22 
12:30:59.257817 | orchestrator | | 784d5b90ae4b4da9bf71edfb09d208c5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-22 12:30:59.257829 | orchestrator | | 7be75afbf90b4144801b29c48ff6f1c3 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 12:30:59.257840 | orchestrator | | 8af597cdc33e430988237c85a36a3fd4 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 12:30:59.257851 | orchestrator | | 9a77c331e3f344778e9aace36b5bfeb0 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-22 12:30:59.257861 | orchestrator | | b25652a3c92d4048b552c8706d875035 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-22 12:30:59.257871 | orchestrator | | b89e680e74db4458b9648bf8e4f73663 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-22 12:30:59.257882 | orchestrator | | ba838f08b5074afb83bffd86fb966f18 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-22 12:30:59.257901 | orchestrator | | c591ad04ce0749a78e4a7a3f41fe5d44 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-22 12:30:59.257926 | orchestrator | | c8cd7418ad9e4661bf16cff46ab8b866 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-22 12:30:59.257952 | orchestrator | | ed6832a658bd4649a1e5b46aa1a9e6fb | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-22 12:30:59.257963 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 12:30:59.447317 | 
orchestrator | 2025-06-22 12:30:59.447405 | orchestrator | # Cinder 2025-06-22 12:30:59.447421 | orchestrator | 2025-06-22 12:30:59.447434 | orchestrator | + echo 2025-06-22 12:30:59.447446 | orchestrator | + echo '# Cinder' 2025-06-22 12:30:59.447457 | orchestrator | + echo 2025-06-22 12:30:59.447469 | orchestrator | + openstack volume service list 2025-06-22 12:31:02.402959 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 12:31:02.403043 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 12:31:02.403057 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 12:31:02.403068 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-22T12:30:52.000000 | 2025-06-22 12:31:02.403079 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T12:31:02.000000 | 2025-06-22 12:31:02.403090 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T12:31:01.000000 | 2025-06-22 12:31:02.403101 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-22T12:31:02.000000 | 2025-06-22 12:31:02.403111 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-22T12:30:54.000000 | 2025-06-22 12:31:02.403137 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-22T12:30:54.000000 | 2025-06-22 12:31:02.403148 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-22T12:30:57.000000 | 2025-06-22 12:31:02.403159 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-22T12:30:57.000000 | 2025-06-22 12:31:02.403170 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-22T12:30:57.000000 | 2025-06-22 
12:31:02.403181 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 12:31:02.588115 | orchestrator | 2025-06-22 12:31:02.588203 | orchestrator | # Neutron 2025-06-22 12:31:02.588218 | orchestrator | 2025-06-22 12:31:02.588230 | orchestrator | + echo 2025-06-22 12:31:02.588243 | orchestrator | + echo '# Neutron' 2025-06-22 12:31:02.588254 | orchestrator | + echo 2025-06-22 12:31:02.588266 | orchestrator | + openstack network agent list 2025-06-22 12:31:05.545543 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 12:31:05.545719 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-22 12:31:05.545739 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 12:31:05.545751 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-22 12:31:05.545763 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-22 12:31:05.545802 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-22 12:31:05.545813 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-22 12:31:05.545824 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-22 12:31:05.545835 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-22 12:31:05.545846 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | 
:-) | UP | neutron-ovn-metadata-agent | 2025-06-22 12:31:05.545857 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 12:31:05.545867 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 12:31:05.545878 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 12:31:05.838488 | orchestrator | + openstack network service provider list 2025-06-22 12:31:08.540086 | orchestrator | +---------------+------+---------+ 2025-06-22 12:31:08.540198 | orchestrator | | Service Type | Name | Default | 2025-06-22 12:31:08.540212 | orchestrator | +---------------+------+---------+ 2025-06-22 12:31:08.540224 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-22 12:31:08.540235 | orchestrator | +---------------+------+---------+ 2025-06-22 12:31:08.803338 | orchestrator | 2025-06-22 12:31:08.803434 | orchestrator | # Nova 2025-06-22 12:31:08.803449 | orchestrator | 2025-06-22 12:31:08.803460 | orchestrator | + echo 2025-06-22 12:31:08.803472 | orchestrator | + echo '# Nova' 2025-06-22 12:31:08.803483 | orchestrator | + echo 2025-06-22 12:31:08.803494 | orchestrator | + openstack compute service list 2025-06-22 12:31:12.031463 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 12:31:12.032354 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 12:31:12.032387 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 12:31:12.032400 | orchestrator | | cba3013e-9228-4252-a996-e7aeb40a905c | nova-scheduler | testbed-node-0 | 
internal | enabled | up | 2025-06-22T12:31:04.000000 | 2025-06-22 12:31:12.032411 | orchestrator | | 2d846127-c3b6-4141-beb5-63cec8a809f1 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T12:31:08.000000 | 2025-06-22 12:31:12.032422 | orchestrator | | 08bc517c-e289-4e20-98a0-619661e43bd6 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T12:31:09.000000 | 2025-06-22 12:31:12.032433 | orchestrator | | f20da51c-d2bf-46e3-a24d-410d74a38338 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-22T12:31:02.000000 | 2025-06-22 12:31:12.032443 | orchestrator | | 283d369a-b602-4f8e-af7b-9e4971470259 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-22T12:31:04.000000 | 2025-06-22 12:31:12.032454 | orchestrator | | 5ca5fcc8-3cef-401a-ac73-9e7b23ae1a65 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-22T12:31:04.000000 | 2025-06-22 12:31:12.032484 | orchestrator | | 0cbbea51-1545-45d9-b251-56ed186b4f68 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-22T12:31:07.000000 | 2025-06-22 12:31:12.032495 | orchestrator | | 3076bcdb-2526-4c4b-8fc0-5d6aa8d62d04 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-22T12:31:07.000000 | 2025-06-22 12:31:12.032506 | orchestrator | | c7c51e43-f054-41d3-9d16-3dc0be543f07 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-22T12:31:07.000000 | 2025-06-22 12:31:12.032541 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 12:31:12.282297 | orchestrator | + openstack hypervisor list 2025-06-22 12:31:16.601800 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 12:31:16.601901 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-22 12:31:16.601915 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 12:31:16.601925 | orchestrator | | 10dae719-12ee-4073-89ff-7bc414bda89e | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-22 12:31:16.601936 | orchestrator | | 02236983-01ea-4574-924b-b2941dd7844a | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-22 12:31:16.601946 | orchestrator | | 1745e7bc-5d9e-427a-8b78-a60f4fda3966 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-22 12:31:16.601956 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 12:31:16.866191 | orchestrator | 2025-06-22 12:31:16.866292 | orchestrator | # Run OpenStack test play 2025-06-22 12:31:16.866308 | orchestrator | 2025-06-22 12:31:16.866321 | orchestrator | + echo 2025-06-22 12:31:16.866333 | orchestrator | + echo '# Run OpenStack test play' 2025-06-22 12:31:16.866345 | orchestrator | + echo 2025-06-22 12:31:16.866356 | orchestrator | + osism apply --environment openstack test 2025-06-22 12:31:18.548423 | orchestrator | 2025-06-22 12:31:18 | INFO  | Trying to run play test in environment openstack 2025-06-22 12:31:18.552644 | orchestrator | Registering Redlock._acquired_script 2025-06-22 12:31:18.552680 | orchestrator | Registering Redlock._extend_script 2025-06-22 12:31:18.552693 | orchestrator | Registering Redlock._release_script 2025-06-22 12:31:18.610203 | orchestrator | 2025-06-22 12:31:18 | INFO  | Task 869c4d33-1a3b-4034-b974-424ec5a04177 (test) was prepared for execution. 2025-06-22 12:31:18.610287 | orchestrator | 2025-06-22 12:31:18 | INFO  | It takes a moment until task 869c4d33-1a3b-4034-b974-424ec5a04177 (test) has been started and output is visible here. 
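The service and hypervisor listings above are read by eye in this job. A minimal sketch of automating that check — assuming it runs on the orchestrator with admin credentials sourced; here the `State` column values seen in the log are used as canned input so the sketch runs without a cloud, and the live CLI call is shown only as a comment:

```shell
# Live capture would be:
#   states=$(openstack compute service list -f value -c State)
# Canned input matching the nine services listed in the log above:
states="up
up
up
up
up
up
up
up
up"
# Count lines that are not exactly "up"; grep exits 1 on zero matches,
# so "|| true" keeps the pipeline from aborting under "set -e".
down=$(printf '%s\n' "$states" | grep -cv '^up$' || true)
if [ "$down" -eq 0 ]; then
  echo "all nova services up"
else
  echo "$down nova service(s) not up"
fi
```

The same pattern works for the neutron agent table (`openstack network agent list -f value -c State`), since both commands support the machine-readable `-f value -c <column>` output format.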
2025-06-22 12:31:22.633126 | orchestrator | 2025-06-22 12:31:22.633548 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-22 12:31:22.634687 | orchestrator | 2025-06-22 12:31:22.635767 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-22 12:31:22.637363 | orchestrator | Sunday 22 June 2025 12:31:22 +0000 (0:00:00.078) 0:00:00.078 *********** 2025-06-22 12:31:26.202959 | orchestrator | changed: [localhost] 2025-06-22 12:31:26.203090 | orchestrator | 2025-06-22 12:31:26.204437 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-22 12:31:26.204476 | orchestrator | Sunday 22 June 2025 12:31:26 +0000 (0:00:03.569) 0:00:03.647 *********** 2025-06-22 12:31:30.331321 | orchestrator | changed: [localhost] 2025-06-22 12:31:30.331453 | orchestrator | 2025-06-22 12:31:30.331472 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-22 12:31:30.331596 | orchestrator | Sunday 22 June 2025 12:31:30 +0000 (0:00:04.129) 0:00:07.777 *********** 2025-06-22 12:31:36.135273 | orchestrator | changed: [localhost] 2025-06-22 12:31:36.135892 | orchestrator | 2025-06-22 12:31:36.136688 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-22 12:31:36.137092 | orchestrator | Sunday 22 June 2025 12:31:36 +0000 (0:00:05.804) 0:00:13.582 *********** 2025-06-22 12:31:39.405798 | orchestrator | changed: [localhost] 2025-06-22 12:31:39.406279 | orchestrator | 2025-06-22 12:31:39.407086 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-22 12:31:39.407947 | orchestrator | Sunday 22 June 2025 12:31:39 +0000 (0:00:03.270) 0:00:16.852 *********** 2025-06-22 12:31:43.466138 | orchestrator | changed: [localhost] 2025-06-22 12:31:43.466518 | orchestrator | 2025-06-22 12:31:43.468041 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-22 12:31:43.468894 | orchestrator | Sunday 22 June 2025 12:31:43 +0000 (0:00:04.059) 0:00:20.911 *********** 2025-06-22 12:31:55.193941 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-22 12:31:55.194116 | orchestrator | changed: [localhost] => (item=member) 2025-06-22 12:31:55.194754 | orchestrator | changed: [localhost] => (item=creator) 2025-06-22 12:31:55.198286 | orchestrator | 2025-06-22 12:31:55.198325 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-22 12:31:55.198745 | orchestrator | Sunday 22 June 2025 12:31:55 +0000 (0:00:11.725) 0:00:32.637 *********** 2025-06-22 12:31:59.974899 | orchestrator | changed: [localhost] 2025-06-22 12:31:59.975017 | orchestrator | 2025-06-22 12:31:59.975034 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-22 12:31:59.979669 | orchestrator | Sunday 22 June 2025 12:31:59 +0000 (0:00:04.781) 0:00:37.419 *********** 2025-06-22 12:32:05.906745 | orchestrator | changed: [localhost] 2025-06-22 12:32:05.907644 | orchestrator | 2025-06-22 12:32:05.907863 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-22 12:32:05.908243 | orchestrator | Sunday 22 June 2025 12:32:05 +0000 (0:00:05.934) 0:00:43.353 *********** 2025-06-22 12:32:10.092134 | orchestrator | changed: [localhost] 2025-06-22 12:32:10.093199 | orchestrator | 2025-06-22 12:32:10.094302 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-22 12:32:10.095033 | orchestrator | Sunday 22 June 2025 12:32:10 +0000 (0:00:04.184) 0:00:47.538 *********** 2025-06-22 12:32:13.997994 | orchestrator | changed: [localhost] 2025-06-22 12:32:13.998649 | orchestrator | 2025-06-22 12:32:13.999907 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2025-06-22 12:32:14.001852 | orchestrator | Sunday 22 June 2025 12:32:13 +0000 (0:00:03.906) 0:00:51.445 *********** 2025-06-22 12:32:17.855921 | orchestrator | changed: [localhost] 2025-06-22 12:32:17.856026 | orchestrator | 2025-06-22 12:32:17.856043 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-22 12:32:17.856382 | orchestrator | Sunday 22 June 2025 12:32:17 +0000 (0:00:03.858) 0:00:55.303 *********** 2025-06-22 12:32:22.157879 | orchestrator | changed: [localhost] 2025-06-22 12:32:22.157994 | orchestrator | 2025-06-22 12:32:22.159218 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-22 12:32:22.159248 | orchestrator | Sunday 22 June 2025 12:32:22 +0000 (0:00:04.301) 0:00:59.604 *********** 2025-06-22 12:32:38.618080 | orchestrator | changed: [localhost] 2025-06-22 12:32:38.618204 | orchestrator | 2025-06-22 12:32:38.618222 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-22 12:32:38.618235 | orchestrator | Sunday 22 June 2025 12:32:38 +0000 (0:00:16.452) 0:01:16.057 *********** 2025-06-22 12:34:52.504154 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 12:34:52.504278 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 12:34:52.504294 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 12:34:52.505497 | orchestrator | 2025-06-22 12:34:52.507914 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 12:35:22.506993 | orchestrator | 2025-06-22 12:35:22.507114 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 12:35:52.507959 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 12:35:52.508084 | orchestrator | 2025-06-22 12:35:52.508102 | orchestrator | STILL ALIVE [task 'Create 
test instances' is running] ************************** 2025-06-22 12:36:00.417571 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 12:36:00.418193 | orchestrator | 2025-06-22 12:36:00.418228 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-22 12:36:00.418243 | orchestrator | Sunday 22 June 2025 12:36:00 +0000 (0:03:21.806) 0:04:37.863 *********** 2025-06-22 12:36:24.833171 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 12:36:24.833289 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 12:36:24.833305 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 12:36:24.833316 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 12:36:24.833512 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 12:36:24.835214 | orchestrator | 2025-06-22 12:36:24.836423 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-22 12:36:24.837208 | orchestrator | Sunday 22 June 2025 12:36:24 +0000 (0:00:24.413) 0:05:02.277 *********** 2025-06-22 12:36:57.154491 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 12:36:57.154769 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 12:36:57.154790 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 12:36:57.154801 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 12:36:57.154810 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 12:36:57.154821 | orchestrator | 2025-06-22 12:36:57.154832 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-22 12:36:57.154843 | orchestrator | Sunday 22 June 2025 12:36:57 +0000 (0:00:32.314) 0:05:34.592 *********** 2025-06-22 12:37:03.856367 | orchestrator | changed: [localhost] 2025-06-22 12:37:03.856731 | orchestrator | 2025-06-22 12:37:03.857211 | orchestrator | TASK [Attach test volume] 
****************************************************** 2025-06-22 12:37:03.859257 | orchestrator | Sunday 22 June 2025 12:37:03 +0000 (0:00:06.712) 0:05:41.304 *********** 2025-06-22 12:37:17.307966 | orchestrator | changed: [localhost] 2025-06-22 12:37:17.308086 | orchestrator | 2025-06-22 12:37:17.308104 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-22 12:37:17.308119 | orchestrator | Sunday 22 June 2025 12:37:17 +0000 (0:00:13.446) 0:05:54.751 *********** 2025-06-22 12:37:22.251807 | orchestrator | ok: [localhost] 2025-06-22 12:37:22.253033 | orchestrator | 2025-06-22 12:37:22.253509 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-22 12:37:22.254622 | orchestrator | Sunday 22 June 2025 12:37:22 +0000 (0:00:04.947) 0:05:59.698 *********** 2025-06-22 12:37:22.299480 | orchestrator | ok: [localhost] => { 2025-06-22 12:37:22.301875 | orchestrator |  "msg": "192.168.112.199" 2025-06-22 12:37:22.302664 | orchestrator | } 2025-06-22 12:37:22.303524 | orchestrator | 2025-06-22 12:37:22.304697 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 12:37:22.304960 | orchestrator | 2025-06-22 12:37:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 12:37:22.305695 | orchestrator | 2025-06-22 12:37:22 | INFO  | Please wait and do not abort execution. 
2025-06-22 12:37:22.306919 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 12:37:22.307906 | orchestrator | 2025-06-22 12:37:22.309226 | orchestrator | 2025-06-22 12:37:22.309771 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 12:37:22.310526 | orchestrator | Sunday 22 June 2025 12:37:22 +0000 (0:00:00.046) 0:05:59.745 *********** 2025-06-22 12:37:22.311435 | orchestrator | =============================================================================== 2025-06-22 12:37:22.312045 | orchestrator | Create test instances ------------------------------------------------- 201.81s 2025-06-22 12:37:22.312522 | orchestrator | Add tag to instances --------------------------------------------------- 32.31s 2025-06-22 12:37:22.312969 | orchestrator | Add metadata to instances ---------------------------------------------- 24.41s 2025-06-22 12:37:22.313678 | orchestrator | Create test network topology ------------------------------------------- 16.45s 2025-06-22 12:37:22.314461 | orchestrator | Attach test volume ----------------------------------------------------- 13.45s 2025-06-22 12:37:22.315323 | orchestrator | Add member roles to user test ------------------------------------------ 11.73s 2025-06-22 12:37:22.316007 | orchestrator | Create test volume ------------------------------------------------------ 6.71s 2025-06-22 12:37:22.316527 | orchestrator | Create ssh security group ----------------------------------------------- 5.93s 2025-06-22 12:37:22.317216 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.80s 2025-06-22 12:37:22.317752 | orchestrator | Create floating ip address ---------------------------------------------- 4.95s 2025-06-22 12:37:22.318545 | orchestrator | Create test server group ------------------------------------------------ 4.78s 2025-06-22 12:37:22.319081 | orchestrator | Create 
test keypair ----------------------------------------------------- 4.30s 2025-06-22 12:37:22.319604 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.19s 2025-06-22 12:37:22.320115 | orchestrator | Create test-admin user -------------------------------------------------- 4.13s 2025-06-22 12:37:22.320636 | orchestrator | Create test user -------------------------------------------------------- 4.06s 2025-06-22 12:37:22.321113 | orchestrator | Create icmp security group ---------------------------------------------- 3.91s 2025-06-22 12:37:22.321788 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.86s 2025-06-22 12:37:22.322656 | orchestrator | Create test domain ------------------------------------------------------ 3.57s 2025-06-22 12:37:22.323759 | orchestrator | Create test project ----------------------------------------------------- 3.27s 2025-06-22 12:37:22.324653 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-06-22 12:37:22.817970 | orchestrator | + server_list 2025-06-22 12:37:22.818113 | orchestrator | + openstack --os-cloud test server list 2025-06-22 12:37:26.679281 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 12:37:26.679379 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-06-22 12:37:26.679392 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 12:37:26.679401 | orchestrator | | 9f4ef58d-220e-4e97-8a16-0b14148e6ad1 | test-4 | ACTIVE | auto_allocated_network=10.42.0.14, 192.168.112.181 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 12:37:26.679411 | orchestrator | | 7268e2a6-6ceb-404f-94ec-247997402a2e | test-3 | ACTIVE | auto_allocated_network=10.42.0.41, 
192.168.112.175 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 12:37:26.679420 | orchestrator | | c2c14b71-83cf-434c-a5c9-ee430b1a7ca2 | test-2 | ACTIVE | auto_allocated_network=10.42.0.60, 192.168.112.200 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 12:37:26.679430 | orchestrator | | 0f59b47b-7901-4ea1-bdfc-04c498e920d2 | test-1 | ACTIVE | auto_allocated_network=10.42.0.6, 192.168.112.185 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 12:37:26.679439 | orchestrator | | a3eb8228-4254-479e-ba79-7e4217aee5f1 | test | ACTIVE | auto_allocated_network=10.42.0.62, 192.168.112.199 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 12:37:26.679448 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 12:37:26.994370 | orchestrator | + openstack --os-cloud test server show test 2025-06-22 12:37:30.546504 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:30.546646 | orchestrator | | Field | Value | 2025-06-22 12:37:30.546665 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:30.546677 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 12:37:30.546707 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 12:37:30.546720 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 12:37:30.546731 | orchestrator | | 
OS-EXT-SRV-ATTR:hostname | test | 2025-06-22 12:37:30.546751 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 12:37:30.546763 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 12:37:30.546775 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 12:37:30.546787 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 12:37:30.546815 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 12:37:30.546827 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 12:37:30.546839 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 12:37:30.546859 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 12:37:30.546870 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 12:37:30.546886 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 12:37:30.546897 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 12:37:30.546909 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T12:33:10.000000 | 2025-06-22 12:37:30.546920 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 12:37:30.546931 | orchestrator | | accessIPv4 | | 2025-06-22 12:37:30.546943 | orchestrator | | accessIPv6 | | 2025-06-22 12:37:30.546954 | orchestrator | | addresses | auto_allocated_network=10.42.0.62, 192.168.112.199 | 2025-06-22 12:37:30.546973 | orchestrator | | config_drive | | 2025-06-22 12:37:30.546985 | orchestrator | | created | 2025-06-22T12:32:48Z | 2025-06-22 12:37:30.547003 | orchestrator | | description | None | 2025-06-22 12:37:30.547015 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 12:37:30.547028 | orchestrator | | hostId | 
2c6a0a38cc8da5fb8710d351a3858d123b6b704aec864951e1610720 | 2025-06-22 12:37:30.547045 | orchestrator | | host_status | None | 2025-06-22 12:37:30.547058 | orchestrator | | id | a3eb8228-4254-479e-ba79-7e4217aee5f1 | 2025-06-22 12:37:30.547070 | orchestrator | | image | Cirros 0.6.2 (1b6d864c-f9ce-4ca4-9fdd-7b8e9c4446f1) | 2025-06-22 12:37:30.547083 | orchestrator | | key_name | test | 2025-06-22 12:37:30.547095 | orchestrator | | locked | False | 2025-06-22 12:37:30.547108 | orchestrator | | locked_reason | None | 2025-06-22 12:37:30.547121 | orchestrator | | name | test | 2025-06-22 12:37:30.547140 | orchestrator | | pinned_availability_zone | None | 2025-06-22 12:37:30.547159 | orchestrator | | progress | 0 | 2025-06-22 12:37:30.547172 | orchestrator | | project_id | 917c261bada14dc5a756697c4ebe90df | 2025-06-22 12:37:30.547184 | orchestrator | | properties | hostname='test' | 2025-06-22 12:37:30.547201 | orchestrator | | security_groups | name='ssh' | 2025-06-22 12:37:30.547215 | orchestrator | | | name='icmp' | 2025-06-22 12:37:30.547227 | orchestrator | | server_groups | None | 2025-06-22 12:37:30.547240 | orchestrator | | status | ACTIVE | 2025-06-22 12:37:30.547252 | orchestrator | | tags | test | 2025-06-22 12:37:30.547265 | orchestrator | | trusted_image_certificates | None | 2025-06-22 12:37:30.547277 | orchestrator | | updated | 2025-06-22T12:36:05Z | 2025-06-22 12:37:30.547295 | orchestrator | | user_id | bdd165215a61467daedee98a33d681e9 | 2025-06-22 12:37:30.547316 | orchestrator | | volumes_attached | delete_on_termination='False', id='c63078c2-c9d2-40db-b0a7-e55b3470cd20' | 2025-06-22 12:37:30.550834 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:30.801975 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-22 12:37:34.052907 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:34.053046 | orchestrator | | Field | Value | 2025-06-22 12:37:34.053092 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:34.053106 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 12:37:34.053118 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 12:37:34.053129 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 12:37:34.053140 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-22 12:37:34.053151 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 12:37:34.053184 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 12:37:34.053196 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 12:37:34.053207 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 12:37:34.053239 | orchestrator | | 
OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 12:37:34.053251 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 12:37:34.053262 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 12:37:34.053273 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 12:37:34.053284 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 12:37:34.053295 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 12:37:34.053314 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 12:37:34.053344 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T12:33:53.000000 | 2025-06-22 12:37:34.053378 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 12:37:34.053399 | orchestrator | | accessIPv4 | | 2025-06-22 12:37:34.053420 | orchestrator | | accessIPv6 | | 2025-06-22 12:37:34.053440 | orchestrator | | addresses | auto_allocated_network=10.42.0.6, 192.168.112.185 | 2025-06-22 12:37:34.053462 | orchestrator | | config_drive | | 2025-06-22 12:37:34.053475 | orchestrator | | created | 2025-06-22T12:33:31Z | 2025-06-22 12:37:34.053494 | orchestrator | | description | None | 2025-06-22 12:37:34.053507 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 12:37:34.053520 | orchestrator | | hostId | 91bbfee680728330cbeeea231fe62e938822a5afdc3d041d59d86d90 | 2025-06-22 12:37:34.053532 | orchestrator | | host_status | None | 2025-06-22 12:37:34.053545 | orchestrator | | id | 0f59b47b-7901-4ea1-bdfc-04c498e920d2 | 2025-06-22 12:37:34.053611 | orchestrator | | image | Cirros 0.6.2 (1b6d864c-f9ce-4ca4-9fdd-7b8e9c4446f1) | 2025-06-22 12:37:34.053627 | orchestrator | | key_name | test | 2025-06-22 12:37:34.053640 | orchestrator | 
| locked | False | 2025-06-22 12:37:34.053652 | orchestrator | | locked_reason | None | 2025-06-22 12:37:34.053664 | orchestrator | | name | test-1 | 2025-06-22 12:37:34.053683 | orchestrator | | pinned_availability_zone | None | 2025-06-22 12:37:34.053697 | orchestrator | | progress | 0 | 2025-06-22 12:37:34.053716 | orchestrator | | project_id | 917c261bada14dc5a756697c4ebe90df | 2025-06-22 12:37:34.053728 | orchestrator | | properties | hostname='test-1' | 2025-06-22 12:37:34.053741 | orchestrator | | security_groups | name='ssh' | 2025-06-22 12:37:34.053754 | orchestrator | | | name='icmp' | 2025-06-22 12:37:34.053773 | orchestrator | | server_groups | None | 2025-06-22 12:37:34.053786 | orchestrator | | status | ACTIVE | 2025-06-22 12:37:34.053799 | orchestrator | | tags | test | 2025-06-22 12:37:34.053812 | orchestrator | | trusted_image_certificates | None | 2025-06-22 12:37:34.053825 | orchestrator | | updated | 2025-06-22T12:36:09Z | 2025-06-22 12:37:34.053843 | orchestrator | | user_id | bdd165215a61467daedee98a33d681e9 | 2025-06-22 12:37:34.053857 | orchestrator | | volumes_attached | | 2025-06-22 12:37:34.057217 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:34.319961 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-22 12:37:37.507684 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2025-06-22 12:37:37.507879 | orchestrator | | Field | Value | 2025-06-22 12:37:37.507959 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 12:37:37.507972 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 12:37:37.507984 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 12:37:37.507995 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 12:37:37.508006 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-22 12:37:37.508017 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 12:37:37.508028 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 12:37:37.508039 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 12:37:37.508050 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 12:37:37.508100 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 12:37:37.508122 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 12:37:37.508153 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 12:37:37.508174 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 12:37:37.508194 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 12:37:37.508216 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 12:37:37.508237 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 12:37:37.508258 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T12:34:35.000000 | 2025-06-22 12:37:37.508278 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 12:37:37.508299 | orchestrator | | accessIPv4 | | 2025-06-22 12:37:37.508321 | orchestrator | | accessIPv6 | | 2025-06-22 
12:37:37.508343 | orchestrator | | addresses | auto_allocated_network=10.42.0.60, 192.168.112.200 |
2025-06-22 12:37:37.508371 | orchestrator | | config_drive | |
2025-06-22 12:37:37.508392 | orchestrator | | created | 2025-06-22T12:34:11Z |
2025-06-22 12:37:37.508403 | orchestrator | | description | None |
2025-06-22 12:37:37.508414 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-22 12:37:37.508425 | orchestrator | | hostId | c4430bb543a8d91a2c2bfb97e2eef9e936c1aad88e81da0d55fa8f4b |
2025-06-22 12:37:37.508436 | orchestrator | | host_status | None |
2025-06-22 12:37:37.508447 | orchestrator | | id | c2c14b71-83cf-434c-a5c9-ee430b1a7ca2 |
2025-06-22 12:37:37.508458 | orchestrator | | image | Cirros 0.6.2 (1b6d864c-f9ce-4ca4-9fdd-7b8e9c4446f1) |
2025-06-22 12:37:37.508469 | orchestrator | | key_name | test |
2025-06-22 12:37:37.508480 | orchestrator | | locked | False |
2025-06-22 12:37:37.508491 | orchestrator | | locked_reason | None |
2025-06-22 12:37:37.508512 | orchestrator | | name | test-2 |
2025-06-22 12:37:37.508529 | orchestrator | | pinned_availability_zone | None |
2025-06-22 12:37:37.508552 | orchestrator | | progress | 0 |
2025-06-22 12:37:37.508595 | orchestrator | | project_id | 917c261bada14dc5a756697c4ebe90df |
2025-06-22 12:37:37.508614 | orchestrator | | properties | hostname='test-2' |
2025-06-22 12:37:37.508633 | orchestrator | | security_groups | name='ssh' |
2025-06-22 12:37:37.508652 | orchestrator | | | name='icmp' |
2025-06-22 12:37:37.508664 | orchestrator | | server_groups | None |
2025-06-22 12:37:37.508675 | orchestrator | | status | ACTIVE |
2025-06-22 12:37:37.508686 | orchestrator | | tags | test |
2025-06-22 12:37:37.508697 | orchestrator | | trusted_image_certificates | None |
2025-06-22 12:37:37.508717 | orchestrator | | updated | 2025-06-22T12:36:14Z |
2025-06-22 12:37:37.508742 | orchestrator | | user_id | bdd165215a61467daedee98a33d681e9 |
2025-06-22 12:37:37.508754 | orchestrator | | volumes_attached | |
2025-06-22 12:37:37.512906 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:37.795011 | orchestrator | + openstack --os-cloud test server show test-3
2025-06-22 12:37:40.886799 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:40.886906 | orchestrator | | Field | Value |
2025-06-22 12:37:40.886925 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:40.886938 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-22 12:37:40.886949 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-22 12:37:40.886960 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-22 12:37:40.886971 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-06-22 12:37:40.887008 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-22 12:37:40.887034 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-22 12:37:40.887046 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-22 12:37:40.887057 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-22 12:37:40.887086 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-22 12:37:40.887098 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-22 12:37:40.887109 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-22 12:37:40.887120 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-22 12:37:40.887131 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-22 12:37:40.887142 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-22 12:37:40.887153 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-22 12:37:40.887171 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T12:35:11.000000 |
2025-06-22 12:37:40.887182 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-22 12:37:40.887198 | orchestrator | | accessIPv4 | |
2025-06-22 12:37:40.887209 | orchestrator | | accessIPv6 | |
2025-06-22 12:37:40.887221 | orchestrator | | addresses | auto_allocated_network=10.42.0.41, 192.168.112.175 |
2025-06-22 12:37:40.887238 | orchestrator | | config_drive | |
2025-06-22 12:37:40.887249 | orchestrator | | created | 2025-06-22T12:34:55Z |
2025-06-22 12:37:40.887260 | orchestrator | | description | None |
2025-06-22 12:37:40.887271 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-22 12:37:40.887283 | orchestrator | | hostId | 91bbfee680728330cbeeea231fe62e938822a5afdc3d041d59d86d90 |
2025-06-22 12:37:40.887302 | orchestrator | | host_status | None |
2025-06-22 12:37:40.887313 | orchestrator | | id | 7268e2a6-6ceb-404f-94ec-247997402a2e |
2025-06-22 12:37:40.887324 | orchestrator | | image | Cirros 0.6.2 (1b6d864c-f9ce-4ca4-9fdd-7b8e9c4446f1) |
2025-06-22 12:37:40.887360 | orchestrator | | key_name | test |
2025-06-22 12:37:40.887378 | orchestrator | | locked | False |
2025-06-22 12:37:40.887389 | orchestrator | | locked_reason | None |
2025-06-22 12:37:40.887400 | orchestrator | | name | test-3 |
2025-06-22 12:37:40.887418 | orchestrator | | pinned_availability_zone | None |
2025-06-22 12:37:40.887429 | orchestrator | | progress | 0 |
2025-06-22 12:37:40.887440 | orchestrator | | project_id | 917c261bada14dc5a756697c4ebe90df |
2025-06-22 12:37:40.887451 | orchestrator | | properties | hostname='test-3' |
2025-06-22 12:37:40.887469 | orchestrator | | security_groups | name='ssh' |
2025-06-22 12:37:40.887480 | orchestrator | | | name='icmp' |
2025-06-22 12:37:40.887491 | orchestrator | | server_groups | None |
2025-06-22 12:37:40.887501 | orchestrator | | status | ACTIVE |
2025-06-22 12:37:40.887512 | orchestrator | | tags | test |
2025-06-22 12:37:40.887529 | orchestrator | | trusted_image_certificates | None |
2025-06-22 12:37:40.887540 | orchestrator | | updated | 2025-06-22T12:36:19Z |
2025-06-22 12:37:40.887556 | orchestrator | | user_id | bdd165215a61467daedee98a33d681e9 |
2025-06-22 12:37:40.887587 | orchestrator | | volumes_attached | |
2025-06-22 12:37:40.891860 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:41.139542 | orchestrator | + openstack --os-cloud test server show test-4
2025-06-22 12:37:44.367608 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:44.368483 | orchestrator | | Field | Value |
2025-06-22 12:37:44.368513 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:44.368525 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-22 12:37:44.368537 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-22 12:37:44.368548 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-22 12:37:44.368559 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-06-22 12:37:44.368601 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-22 12:37:44.368613 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-22 12:37:44.368624 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-22 12:37:44.368635 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-22 12:37:44.368665 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-22 12:37:44.368686 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-22 12:37:44.368714 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-22 12:37:44.368726 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-22 12:37:44.368737 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-22 12:37:44.368748 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-22 12:37:44.368759 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-22 12:37:44.368774 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T12:35:44.000000 |
2025-06-22 12:37:44.368786 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-22 12:37:44.368797 | orchestrator | | accessIPv4 | |
2025-06-22 12:37:44.368808 | orchestrator | | accessIPv6 | |
2025-06-22 12:37:44.368826 | orchestrator | | addresses | auto_allocated_network=10.42.0.14, 192.168.112.181 |
2025-06-22 12:37:44.368844 | orchestrator | | config_drive | |
2025-06-22 12:37:44.368855 | orchestrator | | created | 2025-06-22T12:35:27Z |
2025-06-22 12:37:44.368866 | orchestrator | | description | None |
2025-06-22 12:37:44.368877 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-22 12:37:44.368888 | orchestrator | | hostId | 2c6a0a38cc8da5fb8710d351a3858d123b6b704aec864951e1610720 |
2025-06-22 12:37:44.368900 | orchestrator | | host_status | None |
2025-06-22 12:37:44.368916 | orchestrator | | id | 9f4ef58d-220e-4e97-8a16-0b14148e6ad1 |
2025-06-22 12:37:44.368928 | orchestrator | | image | Cirros 0.6.2 (1b6d864c-f9ce-4ca4-9fdd-7b8e9c4446f1) |
2025-06-22 12:37:44.368939 | orchestrator | | key_name | test |
2025-06-22 12:37:44.368950 | orchestrator | | locked | False |
2025-06-22 12:37:44.368967 | orchestrator | | locked_reason | None |
2025-06-22 12:37:44.368978 | orchestrator | | name | test-4 |
2025-06-22 12:37:44.368995 | orchestrator | | pinned_availability_zone | None |
2025-06-22 12:37:44.369006 | orchestrator | | progress | 0 |
2025-06-22 12:37:44.369017 | orchestrator | | project_id | 917c261bada14dc5a756697c4ebe90df |
2025-06-22 12:37:44.369028 | orchestrator | | properties | hostname='test-4' |
2025-06-22 12:37:44.369039 | orchestrator | | security_groups | name='ssh' |
2025-06-22 12:37:44.369050 | orchestrator | | | name='icmp' |
2025-06-22 12:37:44.369066 | orchestrator | | server_groups | None |
2025-06-22 12:37:44.369077 | orchestrator | | status | ACTIVE |
2025-06-22 12:37:44.369088 | orchestrator | | tags | test |
2025-06-22 12:37:44.369105 | orchestrator | | trusted_image_certificates | None |
2025-06-22 12:37:44.369116 | orchestrator | | updated | 2025-06-22T12:36:24Z |
2025-06-22 12:37:44.369132 | orchestrator | | user_id | bdd165215a61467daedee98a33d681e9 |
2025-06-22 12:37:44.369143 | orchestrator | | volumes_attached | |
2025-06-22 12:37:44.372715 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-22 12:37:44.739548 | orchestrator | + server_ping
2025-06-22 12:37:44.741308 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-22 12:37:44.741341 | orchestrator | ++ tr -d '\r'
2025-06-22 12:37:47.606113 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-22 12:37:47.606218 | orchestrator | + ping -c3 192.168.112.200
2025-06-22 12:37:47.621802 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-06-22 12:37:47.621861 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=10.5 ms
2025-06-22 12:37:48.616099 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.91 ms
2025-06-22 12:37:49.616503 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.91 ms
2025-06-22 12:37:49.616667 | orchestrator |
2025-06-22 12:37:49.616685 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-06-22 12:37:49.616698 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-22 12:37:49.616711 | orchestrator | rtt min/avg/max/mdev = 1.911/5.105/10.493/3.831 ms
2025-06-22 12:37:49.617022 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-22 12:37:49.617593 | orchestrator | + ping -c3 192.168.112.199
2025-06-22 12:37:49.632138 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-06-22 12:37:49.632207 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=9.31 ms
2025-06-22 12:37:50.626936 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.50 ms
2025-06-22 12:37:51.628186 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.18 ms
2025-06-22 12:37:51.628309 | orchestrator |
2025-06-22 12:37:51.628324 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-06-22 12:37:51.628338 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-22 12:37:51.628349 | orchestrator | rtt min/avg/max/mdev = 2.179/4.661/9.310/3.289 ms
2025-06-22 12:37:51.628620 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-22 12:37:51.628672 | orchestrator | + ping -c3 192.168.112.185
2025-06-22 12:37:51.643714 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data.
2025-06-22 12:37:51.643799 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=10.6 ms
2025-06-22 12:37:52.636293 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.15 ms
2025-06-22 12:37:53.637406 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.63 ms
2025-06-22 12:37:53.637533 | orchestrator |
2025-06-22 12:37:53.637559 | orchestrator | --- 192.168.112.185 ping statistics ---
2025-06-22 12:37:53.637641 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2025-06-22 12:37:53.637663 | orchestrator | rtt min/avg/max/mdev = 2.154/5.136/10.626/3.886 ms
2025-06-22 12:37:53.638398 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-22 12:37:53.638444 | orchestrator | + ping -c3 192.168.112.175
2025-06-22 12:37:53.653820 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-06-22 12:37:53.653881 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=11.6 ms
2025-06-22 12:37:54.646677 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.49 ms
2025-06-22 12:37:55.648025 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.91 ms
2025-06-22 12:37:55.648128 | orchestrator |
2025-06-22 12:37:55.648144 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-06-22 12:37:55.648157 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-22 12:37:55.648169 | orchestrator | rtt min/avg/max/mdev = 1.907/5.320/11.566/4.422 ms
2025-06-22 12:37:55.648542 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-22 12:37:55.648605 | orchestrator | + ping -c3 192.168.112.181
2025-06-22 12:37:55.660103 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2025-06-22 12:37:55.660135 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.99 ms
2025-06-22 12:37:56.657277 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.52 ms
2025-06-22 12:37:57.659371 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.31 ms
2025-06-22 12:37:57.659474 | orchestrator |
2025-06-22 12:37:57.659489 | orchestrator | --- 192.168.112.181 ping statistics ---
2025-06-22 12:37:57.659502 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-22 12:37:57.659513 | orchestrator | rtt min/avg/max/mdev = 2.305/3.939/6.992/2.160 ms
2025-06-22 12:37:57.659707 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-22 12:37:57.904057 | orchestrator | ok: Runtime: 0:09:52.916642
2025-06-22 12:37:57.943414 |
2025-06-22 12:37:57.943510 | TASK [Run tempest]
2025-06-22 12:37:58.473471 | orchestrator | skipping: Conditional result was False
2025-06-22 12:37:58.489446 |
2025-06-22 12:37:58.489580 | TASK [Check prometheus alert status]
2025-06-22 12:37:59.022420 | orchestrator | skipping: Conditional result was False
2025-06-22 12:37:59.023761 |
2025-06-22 12:37:59.023835 | PLAY RECAP
2025-06-22 12:37:59.023912 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-22 12:37:59.023939 |
2025-06-22 12:37:59.191744 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-22 12:37:59.193397 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-22 12:37:59.842254 |
2025-06-22 12:37:59.842374 | PLAY [Post output play]
2025-06-22 12:37:59.856290 |
2025-06-22 12:37:59.856394 | LOOP [stage-output : Register sources]
2025-06-22 12:37:59.928513 |
2025-06-22 12:37:59.928726 | TASK [stage-output : Check sudo]
2025-06-22 12:38:00.762227 | orchestrator | sudo: a password is required
2025-06-22 12:38:00.964725 | orchestrator | ok: Runtime: 0:00:00.037219
2025-06-22 12:38:00.978038 |
2025-06-22 12:38:00.978199 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-22 12:38:01.022383 |
2025-06-22 12:38:01.022922 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-22 12:38:01.099992 | orchestrator | ok
2025-06-22 12:38:01.108111 |
2025-06-22 12:38:01.108261 | LOOP [stage-output : Ensure target folders exist]
2025-06-22 12:38:01.557029 | orchestrator | ok: "docs"
2025-06-22 12:38:01.557357 |
2025-06-22 12:38:01.787266 | orchestrator | ok: "artifacts"
2025-06-22 12:38:02.039077 | orchestrator | ok: "logs"
2025-06-22 12:38:02.061559 |
2025-06-22 12:38:02.061731 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-22 12:38:02.096061 |
2025-06-22 12:38:02.096331 | TASK [stage-output : Make all log files readable]
2025-06-22 12:38:02.392558 | orchestrator | ok
2025-06-22 12:38:02.399897 |
2025-06-22 12:38:02.400028 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-22 12:38:02.434597 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:02.452776 |
2025-06-22 12:38:02.453015 | TASK [stage-output : Discover log files for compression]
2025-06-22 12:38:02.469047 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:02.476806 |
2025-06-22 12:38:02.476939 | LOOP [stage-output : Archive everything from logs]
2025-06-22 12:38:02.515972 |
2025-06-22 12:38:02.516145 | PLAY [Post cleanup play]
2025-06-22 12:38:02.524307 |
2025-06-22 12:38:02.524421 | TASK [Set cloud fact (Zuul deployment)]
2025-06-22 12:38:02.588385 | orchestrator | ok
2025-06-22 12:38:02.598602 |
2025-06-22 12:38:02.598715 | TASK [Set cloud fact (local deployment)]
2025-06-22 12:38:02.633313 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:02.648664 |
2025-06-22 12:38:02.648805 | TASK [Clean the cloud environment]
2025-06-22 12:38:03.478458 | orchestrator | 2025-06-22 12:38:03 - clean up servers
2025-06-22 12:38:04.215047 | orchestrator | 2025-06-22 12:38:04 - testbed-manager
2025-06-22 12:38:04.299468 | orchestrator | 2025-06-22 12:38:04 - testbed-node-3
2025-06-22 12:38:04.391019 | orchestrator | 2025-06-22 12:38:04 - testbed-node-2
2025-06-22 12:38:04.479909 | orchestrator | 2025-06-22 12:38:04 - testbed-node-0
2025-06-22 12:38:04.573670 | orchestrator | 2025-06-22 12:38:04 - testbed-node-4
2025-06-22 12:38:04.663839 | orchestrator | 2025-06-22 12:38:04 - testbed-node-1
2025-06-22 12:38:04.757881 | orchestrator | 2025-06-22 12:38:04 - testbed-node-5
2025-06-22 12:38:04.852345 | orchestrator | 2025-06-22 12:38:04 - clean up keypairs
2025-06-22 12:38:04.871317 | orchestrator | 2025-06-22 12:38:04 - testbed
2025-06-22 12:38:04.898956 | orchestrator | 2025-06-22 12:38:04 - wait for servers to be gone
2025-06-22 12:38:15.706656 | orchestrator | 2025-06-22 12:38:15 - clean up ports
2025-06-22 12:38:15.906489 | orchestrator | 2025-06-22 12:38:15 - 36c3f7b1-8bf6-4536-af4f-2534bb9b3d4a
2025-06-22 12:38:16.438001 | orchestrator | 2025-06-22 12:38:16 - a90c8a92-b4d8-4c5f-8083-358971d7744a
2025-06-22 12:38:16.673368 | orchestrator | 2025-06-22 12:38:16 - aa36239b-3abb-4b5c-8166-1697c89ff265
2025-06-22 12:38:16.925169 | orchestrator | 2025-06-22 12:38:16 - ab4c8ff7-e7bc-4fe5-ba4b-4f4476934dc9
2025-06-22 12:38:17.155379 | orchestrator | 2025-06-22 12:38:17 - c3a643b9-fa08-4473-a770-a4242e50d5b8
2025-06-22 12:38:17.360743 | orchestrator | 2025-06-22 12:38:17 - ee064403-4b62-4be7-8587-c7ea3e364c27
2025-06-22 12:38:17.569176 | orchestrator | 2025-06-22 12:38:17 - fb8cca32-ab91-4a8d-b275-cf144fd2d0c5
2025-06-22 12:38:17.780757 | orchestrator | 2025-06-22 12:38:17 - clean up volumes
2025-06-22 12:38:17.908528 | orchestrator | 2025-06-22 12:38:17 - testbed-volume-1-node-base
2025-06-22 12:38:17.948727 | orchestrator | 2025-06-22 12:38:17 - testbed-volume-0-node-base
2025-06-22 12:38:17.990689 | orchestrator | 2025-06-22 12:38:17 - testbed-volume-4-node-base
2025-06-22 12:38:18.029968 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-3-node-base
2025-06-22 12:38:18.080170 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-5-node-base
2025-06-22 12:38:18.130881 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-2-node-base
2025-06-22 12:38:18.178669 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-manager-base
2025-06-22 12:38:18.223187 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-0-node-3
2025-06-22 12:38:18.266392 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-5-node-5
2025-06-22 12:38:18.308432 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-2-node-5
2025-06-22 12:38:18.350680 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-4-node-4
2025-06-22 12:38:18.392660 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-7-node-4
2025-06-22 12:38:18.435050 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-3-node-3
2025-06-22 12:38:18.478951 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-6-node-3
2025-06-22 12:38:18.525315 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-1-node-4
2025-06-22 12:38:18.564027 | orchestrator | 2025-06-22 12:38:18 - testbed-volume-8-node-5
2025-06-22 12:38:18.606516 | orchestrator | 2025-06-22 12:38:18 - disconnect routers
2025-06-22 12:38:18.717353 | orchestrator | 2025-06-22 12:38:18 - testbed
2025-06-22 12:38:20.120553 | orchestrator | 2025-06-22 12:38:20 - clean up subnets
2025-06-22 12:38:20.171606 | orchestrator | 2025-06-22 12:38:20 - subnet-testbed-management
2025-06-22 12:38:20.334874 | orchestrator | 2025-06-22 12:38:20 - clean up networks
2025-06-22 12:38:20.508526 | orchestrator | 2025-06-22 12:38:20 - net-testbed-management
2025-06-22 12:38:20.780952 | orchestrator | 2025-06-22 12:38:20 - clean up security groups
2025-06-22 12:38:20.820439 | orchestrator | 2025-06-22 12:38:20 - testbed-node
2025-06-22 12:38:20.931990 | orchestrator | 2025-06-22 12:38:20 - testbed-management
2025-06-22 12:38:21.050223 | orchestrator | 2025-06-22 12:38:21 - clean up floating ips
2025-06-22 12:38:21.084062 | orchestrator | 2025-06-22 12:38:21 - 81.163.192.200
2025-06-22 12:38:21.428823 | orchestrator | 2025-06-22 12:38:21 - clean up routers
2025-06-22 12:38:21.536542 | orchestrator | 2025-06-22 12:38:21 - testbed
2025-06-22 12:38:23.210126 | orchestrator | ok: Runtime: 0:00:19.798703
2025-06-22 12:38:23.214263 |
2025-06-22 12:38:23.214409 | PLAY RECAP
2025-06-22 12:38:23.214511 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-22 12:38:23.214561 |
2025-06-22 12:38:23.364821 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-22 12:38:23.367500 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-22 12:38:24.084620 |
2025-06-22 12:38:24.084771 | PLAY [Cleanup play]
2025-06-22 12:38:24.100453 |
2025-06-22 12:38:24.100585 | TASK [Set cloud fact (Zuul deployment)]
2025-06-22 12:38:24.153254 | orchestrator | ok
2025-06-22 12:38:24.160913 |
2025-06-22 12:38:24.161042 | TASK [Set cloud fact (local deployment)]
2025-06-22 12:38:24.195367 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:24.212810 |
2025-06-22 12:38:24.213079 | TASK [Clean the cloud environment]
2025-06-22 12:38:25.361663 | orchestrator | 2025-06-22 12:38:25 - clean up servers
2025-06-22 12:38:25.846528 | orchestrator | 2025-06-22 12:38:25 - clean up keypairs
2025-06-22 12:38:25.866450 | orchestrator | 2025-06-22 12:38:25 - wait for servers to be gone
2025-06-22 12:38:25.914328 | orchestrator | 2025-06-22 12:38:25 - clean up ports
2025-06-22 12:38:25.992885 | orchestrator | 2025-06-22 12:38:25 - clean up volumes
2025-06-22 12:38:26.066489 | orchestrator | 2025-06-22 12:38:26 - disconnect routers
2025-06-22 12:38:26.095850 | orchestrator | 2025-06-22 12:38:26 - clean up subnets
2025-06-22 12:38:26.113754 | orchestrator | 2025-06-22 12:38:26 - clean up networks
2025-06-22 12:38:26.291228 | orchestrator | 2025-06-22 12:38:26 - clean up security groups
2025-06-22 12:38:26.327534 | orchestrator | 2025-06-22 12:38:26 - clean up floating ips
2025-06-22 12:38:26.352325 | orchestrator | 2025-06-22 12:38:26 - clean up routers
2025-06-22 12:38:26.762862 | orchestrator | ok: Runtime: 0:00:01.387369
2025-06-22 12:38:26.766692 |
2025-06-22 12:38:26.766944 | PLAY RECAP
2025-06-22 12:38:26.767082 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-22 12:38:26.767147 |
2025-06-22 12:38:26.900711 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-22 12:38:26.903332 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-22 12:38:27.736208 |
2025-06-22 12:38:27.736402 | PLAY [Base post-fetch]
2025-06-22 12:38:27.754629 |
2025-06-22 12:38:27.754807 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-22 12:38:27.812940 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:27.826247 |
2025-06-22 12:38:27.826440 | TASK [fetch-output : Set log path for single node]
2025-06-22 12:38:27.884453 | orchestrator | ok
2025-06-22 12:38:27.893273 |
2025-06-22 12:38:27.893447 | LOOP [fetch-output : Ensure local output dirs]
2025-06-22 12:38:28.389389 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/work/logs"
2025-06-22 12:38:28.675092 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/work/artifacts"
2025-06-22 12:38:28.964360 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2016dcad747040a4b5a9e68a0799e111/work/docs"
2025-06-22 12:38:28.981585 |
2025-06-22 12:38:28.981730 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-22 12:38:29.951724 | orchestrator | changed: .d..t...... ./
2025-06-22 12:38:29.952074 | orchestrator | changed: All items complete
2025-06-22 12:38:29.952123 |
2025-06-22 12:38:30.689285 | orchestrator | changed: .d..t...... ./
2025-06-22 12:38:31.459764 | orchestrator | changed: .d..t...... ./
2025-06-22 12:38:31.482909 |
2025-06-22 12:38:31.483073 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-22 12:38:31.520522 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:31.524696 | orchestrator | skipping: Conditional result was False
2025-06-22 12:38:31.541160 |
2025-06-22 12:38:31.541321 | PLAY RECAP
2025-06-22 12:38:31.541411 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-22 12:38:31.541454 |
2025-06-22 12:38:31.692293 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-22 12:38:31.694141 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-22 12:38:32.446417 |
2025-06-22 12:38:32.446595 | PLAY [Base post]
2025-06-22 12:38:32.461362 |
2025-06-22 12:38:32.461512 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-22 12:38:33.725022 | orchestrator | changed
2025-06-22 12:38:33.734562 |
2025-06-22 12:38:33.734700 | PLAY RECAP
2025-06-22 12:38:33.734776 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-22 12:38:33.734943 |
2025-06-22 12:38:33.862945 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-22 12:38:33.863977 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-22 12:38:34.663007 |
2025-06-22 12:38:34.663186 | PLAY [Base post-logs]
2025-06-22 12:38:34.674012 |
2025-06-22 12:38:34.674151 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-22 12:38:35.150051 | localhost | changed
2025-06-22 12:38:35.176007 |
2025-06-22 12:38:35.176333 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-22 12:38:35.217664 | localhost | ok
2025-06-22 12:38:35.224817 |
2025-06-22 12:38:35.225021 | TASK [Set zuul-log-path fact]
2025-06-22 12:38:35.244026 | localhost | ok
2025-06-22 12:38:35.258870 |
2025-06-22 12:38:35.259041 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-22 12:38:35.287204 | localhost | ok
2025-06-22 12:38:35.294092 |
2025-06-22 12:38:35.294272 | TASK [upload-logs : Create log directories]
2025-06-22 12:38:35.841123 | localhost | changed
2025-06-22 12:38:35.844032 |
2025-06-22 12:38:35.844152 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-22 12:38:36.344993 | localhost -> localhost | ok: Runtime: 0:00:00.007382
2025-06-22 12:38:36.349270 |
2025-06-22 12:38:36.349390 | TASK [upload-logs : Upload logs to log server]
2025-06-22 12:38:36.947080 | localhost | Output suppressed because no_log was given
2025-06-22 12:38:36.951637 |
2025-06-22 12:38:36.951826 | LOOP [upload-logs : Compress console log and json output]
2025-06-22 12:38:37.006824 | localhost | skipping: Conditional result was False
2025-06-22 12:38:37.012038 | localhost | skipping: Conditional result was False
2025-06-22 12:38:37.024958 |
2025-06-22 12:38:37.025208 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-22 12:38:37.073876 | localhost | skipping: Conditional result was False
2025-06-22 12:38:37.074518 |
2025-06-22 12:38:37.078181 | localhost | skipping: Conditional result was False
2025-06-22 12:38:37.091811 |
2025-06-22 12:38:37.092222 | LOOP [upload-logs : Upload console log and json output]